Living Without a Moral Code

Some do not think that morality exists. Others have chosen a life of sensual beauty instead of morality: aesthetics over ethics. Still others despise morality, seeing it as an impediment to their own domination of others.

I am in a rather odd position. I think that moral imperatives are real and knowable, but as it happens I know almost none of them. So I don’t know how to live a moral life. I am morally bound but morally blind. I’m driving my life forward at 100 mph, but I don’t know which way to turn.

Let me explain.

I spend most of my time on moral theory. Why? Because if I have the wrong theory, then all of my conclusions in applied ethics are unfounded. I need to make sure I have the right theory before I can answer questions in applied ethics.

I think I may have found the right theory: desire utilitarianism. Unfortunately, this theory does not let me answer moral questions by closing my eyes and asking my “conscience.” Nor does it have any easy answers to any moral questions.

Instead, desire utilitarianism says that moral imperatives can only be known by way of calculations involving billions of (mostly) unknown variables: desires, strengths of desires, relations between desires and states of affairs, and relations between desires and other desires.
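To caricature the kind of calculation this demands, here is a toy sketch in Python. Every desire, strength, and outcome below is invented purely for illustration; a real desire-utilitarian calculation would involve billions of such variables, most of them unknown:

```python
# Toy illustration only (not a real moral calculus): score a candidate
# outcome by summing the strengths of the desires it fulfills.
# All propositions and strength values here are made up.

def desire_fulfillment(state, desires):
    """Sum the strengths of desires fulfilled in `state`.

    `desires` maps a proposition (str) to a strength (float);
    `state` is the set of propositions that are true in that outcome.
    """
    return sum(strength for prop, strength in desires.items() if prop in state)

# Two hypothetical desires with guessed-at strengths:
desires = {"truth is promoted": 0.9, "I earn more money": 0.4}

# Two candidate outcomes of a decision:
decline_offer = {"truth is promoted"}
take_offer = {"I earn more money"}

print(desire_fulfillment(decline_offer, desires))  # 0.9
print(desire_fulfillment(take_offer, desires))     # 0.4
```

Even this cartoon version shows where the difficulty lies: the hard part is not the arithmetic but filling in the inputs, which in practice are mostly unknown.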

Oofta. Can’t I have a moral theory that is a bit more… practical?

Unfortunately, all other moral theories have turned out to be false.

Darn.

So, I’m still researching moral theory, and it may be years or decades before I can turn my eye to questions of applied ethics.

Which means I’ll be living without a moral code for a long time.

Which wouldn’t be a big deal, except that I want to be moral very badly. That’s why I spend so much time studying ethics in the first place!

So I’m stuck. I want to be moral, but I’m not sure what is moral, whether I’m living morally right now, or when I’ll get around to figuring out what is moral! I don’t know if I’m a good person.

Now, I don’t mean to overstate this. I’ve got some good guesses about what is moral and not moral, based on the theory of morality that seems most true to me. But they’re really just guesses.

But I can’t just stop living. I have to make decisions every day. Thousands of them. I can’t calculate the morality of each one – or really, hardly any of them. Not yet, anyway. So what do I do?

What I am doing is a hybrid of (1) acting on my best guesses as to what is moral according to desire utilitarianism, and (2) just ‘going with the flow’ on the many issues whose moral implications I cannot even guess at.

For example: I have a fairly well-considered guess that truth has much moral value, so I promote truth and successful truth-seeking in whatever I do. But when I’m offered lots of money to work for a company where I’d be less productive and less influential than I am now – do I take it? I had no idea how to perform that moral calculation. So I decided to ‘go with the flow.’ I couldn’t see any major harm done either way, but there were some practical considerations against taking the money, so I declined the offer.

It’s a pretty weird place to be, honestly. But that’s what happens when it matters to you if your beliefs are true. You suddenly don’t know a lot of stuff you might have once thought you knew.

Really? It looks like no one could live a moral life if we have to do research to find out what it is.
How about this: morality is just a human word and a human idea, subjective and individual. Humans agree on shit from time to time, and you can see this never-ending process of defining right and wrong in courtrooms, parenting, schooling, traditions, etc. There we go – all done – the truth on morality in a box for you.

Sounds like a nice extremist mess you’ve cooked up for yourself. Perhaps you shouldn’t have rejected outright all of your background knowledge and intuitions about successfully navigating the norms of human desire. That’s what you were doing all those years as a Christian (and beyond), right? Wouldn’t that be an accurate assessment even if there’s lots of “moral pollution” in there to talk about as well? Someone with “common sense” would know there is at least limited merit to any moral theory, and that absolutely false and absolutely true aren’t as important as merely being able to refine (and yes, reject on occasion) your sensibilities over time. If you pay disjointed apologetic lip service to these observations, it’s clear you aren’t actually applying them in the exclusionary way they should be applied – at the expense of the bad method, not bringing it along for the ride.

When I learned about “desire utilitarianism” from your podcasts I rejoiced. It was a much more articulate version of what I already believed. But I hadn’t done the asinine thing you’ve done and thrown absolutely everything out before I got there. What if it had been 10 years before I stumbled upon it? Should I have been amoral all of that time? With hundreds of thousands of important decisions to make in the meantime? Hardly. I just can’t imagine why anyone in their right mind would follow in your footsteps. You show a lot of promise in a lot of other areas, which I admire. I just don’t know what you are thinking here.
But suit yourself,
Ben

Well, this is true for all utilitarians, isn’t it?
The human brain is not an omniscient supercomputer and is not able to calculate the consequences of every single action (which would be required to be perfectly moral in any utilitarian system). Also, if we were omniscient we would spend most of our time doing really, really bizarre stuff, for we would be able to see the consequences of all our actions, no matter how far-fetched. And opening that packet of flowers lying in wait in the hallway for your neighbour and plucking out all the lilies just might save her some grief in 10 years.

The reality of the situation is that you’re never perfectly moral in any system of utilitarianism, since you don’t know every aspect of the future. You can just do your best and make educated guesses: think things over in terms of the consequences of your actions, and spend more time musing on matters of great importance and less on matters of little importance.
And I would call that moral, for what more can we ask of a person but his or her best effort to be moral?

<blockquote> I want to be moral very badly. That’s why I spend so much time studying ethics in the first place!</blockquote>
Well, if a bright guy like you can’t figure it out, think of all of us out there fumbling. So your calculations should keep in mind that even if you figure it out for yourself, you won’t be able to influence the minions — they’ll just keep plugging along as if you hadn’t. So, since you are sure you will keep doing just fine without a systematic approach, I doubt your self-introspection. I doubt that you are studying ethics so you can be better. I think you study because that is the sort of organism you are. The rest is grand rationalizations — albeit damn good ones.

There is indeed a heavy “cost” to rejecting moral intuition as a source of knowledge. The problem is that moral intuition is an unwarranted source of knowledge. You seem to be giving an argument from consequences, rather than giving me some good reason to trust our moral intuitions.

As for refining our moral views: But are we refining them in the right direction? To know this, we’d have to have a reliable source of knowledge about moral value, and intuition is NOT that source.

What if it had been 10 years before I stumbled upon it? Should I have been amoral all of that time?

You seem to be confusing the problem of living without moral knowledge with the problem of living without moral certainty. If you stick with an incorrect theory you might feel like you’re moral, but that doesn’t mean you are.

Ben, you seem very concerned to feel like you are moral, and to live in a way that gives you practical benefit. I am more concerned with truth. If there are moral facts, then I want to know which ones are true, not which ones conform to my inborn prejudices or which ones are most easily applicable.

Some clarification: By false, I presume you mean they are not perfectly correct all the time. That doesn’t mean they are useless, or that they are dead wrong. Consider Newton’s laws of motion. They are “false” in that they do not apply accurately in all situations, particularly at the extremes of mass and velocity. But they still work plenty fine for most everyday matters, and indeed, they are still taught in schools. In morality, for comparison, the Golden Rule is not a perfect theory, but it’s a darn fine start.
My view is that morality is a pragmatic necessity for building societies, and that this is why attempts to fit ideologically pure theories on it ex post facto generally fail.

There’s the rub – what is important for your actions is not what desire utilitarianism prescribes per se, but rather what you want. But the above sentence seems to me to display a lack of self-reflection. What is it precisely, that you want? “To be moral” is such a generic, almost meaningless, answer. I suggest that if you will look well and deeply inside yourself, you will find that desire utilitarianism is not precisely what you want.

And I suggest that looking deeply into oneself is the heart and beginning of all true moral theories. Theories that prescribe some abstract universal solutions are merely putting one desire, the theory’s summum bonum, above your own, which is irrational for any agent to accept. Instead, moral theory is about coming to know yourself, in light of the scientifically established human nature in all its variants, and developing the mental tools to help further reveal and express your deep and dominant desires.

I suspect your insistence on seeking universally valid moral truths is an atavistic remnant of your Christian upbringing. It’s time you grow beyond (cosmic) good and evil, to embrace your humanity.

Even if you find that Desire Utilitarianism is a coherent system with laws far-reaching enough to give a clear answer to any potential moral decision, and even if you could do the impossible, and apply this system fully to every decision, taking correctly into account all the billions of variables, you would still merely have a set of good and bad actions under a specific system. Are you not still stuck with not knowing if DU has authority? How does DU validate itself from outside itself?

There is indeed a heavy “cost” to rejecting moral intuition as a source of knowledge. The problem is that moral intuition is an unwarranted source of knowledge.

This is probably true, but seems to be dismissive of moral intuition entirely. I believe even Alonzo Fyfe has talked about the merit of moral intuition. Specifically, to the extent we get our intuition from our parents, family, and our society, to that extent our desires have been shaped by them and therefore reflect what is “good”. It’s definitely not the best way to be moral, but it’s not entirely useless either.

There’s the rub – what is important for your actions is not what desire utilitarianism prescribes per se, but rather what you want. But the above sentence seems to me to display a lack of self-reflection. What is it precisely, that you want? “To be moral” is such a generic, almost meaningless, answer. I suggest that if you will look well and deeply inside yourself, you will find that desire utilitarianism is not precisely what you want.

If Luke is like me, then he probably wants to live a good life, and make the world a better place. That’s why I care about morality. That’s what is appealing about DU — it provides a framework for discovering what is “good” and “better”. We need not be moral nihilists, or moral relativists seeking only our own hedonistic desires. Neither do we need to be ascetics, giving up all of our desires. Instead, we can allow our desires to be changed to live in harmony with everyone else’s desires, too.

Kip: If Luke is like me, then he probably wants to live a good life, and make the world a better place.

These are empty words without context. Why do you think DU provides the right context? Is DU what you really desire? I don’t think it is. I don’t think there ever was a creature that wanted it, and likely there won’t be unless someone intentionally makes one.

Do you really want to evaluate all desires equally, without regard to their content, except for how harmonious the desire is in a perfect world? How about how harmonious the desire is in this world? And why not be true to yourself, and value the things you desire even if they’re not perfectly harmonious in a perfect world?

Do you really want to evaluate all desires equally, without regard to who holds them? Don’t the desires of your loved ones count for more? [Not saying that others' desires should be disregarded, here!]

Do you really want to change your desires and erase your self? DU teaches you to forge yourself into an agent that will, in a perfect world filled with identical agents, provide maximal total desire fulfillment. Why erase yourself so? If truth is not valuable in that setting (and I suspect it isn’t…), should Luke abandon his desire to live by it? What does that ideal world have to do with the fulfillment of your own desires?

That’s why I care about morality. That’s what is appealing about DU — it provides a framework for discovering what is “good” and “better”. We need not be moral nihilists, or moral relativists seeking only our own hedonistic desires.

Who said relativists seek only to fulfill their hedonistic desires? I think that people are generally not aware of the extent to which altruistic desires motivate them.

Neither do we need to be ascetics, giving up all of our desires. Instead, we can allow our desires to be changed to live in harmony with everyone else’s desires, too.

That doesn’t strike me as a good reason to betray your current desires. You are saying that you would like to replace your current desires instead of fulfilling them, which isn’t a rational thing to do.

Luke: “The problem is that moral intuition is an unwarranted source of knowledge. “

No, actually, our moral intuition, as I already said, comes from our experience as successful navigators of human desire. In other words, I formulated “moral intuition” in terms of desire utilitarianism so you’d have no excuse not to get the idea. But you found an excuse anyway by ignoring what I said: that everyone has at least *some* experience in managing their desire economy successfully, and that this manifests as intuition in future circumstances. So I gave you good reason to at least partially trust your moral intuitions based off of the very moral theory you claim to advocate. In ADDITION to that, I went into the consequences of such folly for the sake of a balanced picture.

Luke: “But are we refining them in the right direction?”

This question of yours is based off of your misrepresentation of the criticism. For some reason, trusting our moral intuition is an all-or-nothing deal to you. I nowhere advocated the impossibility of correcting our prior sensibilities. In fact, I said the opposite: “…aren’t as important as merely being able to refine (and yes, *reject* on occasion) your sensibilities over time.”

Luke: “You seem to be confusing the problem of living without moral knowledge and living without moral certainty. If you stick with an incorrect theory you might feel like you’re moral, but that doesn’t mean you are.”

Well, you’ve a priori defined all moral knowledge that is not desire utilitarianism as non-moral knowledge in order to justify your extremism. If your new moral theory has absolutely nothing to do with what’s come before (in your own life) or what anyone else is doing around you with other moral theories…I would be incredibly surprised. It just sounds like you need there to be some extraordinary difference no matter how implausible a claim that is on the face of it. All we have to do is get a description of the kinds of moral claims you embrace now and the kinds of claims you used to think were valid. If we can find sufficient commonalities, then I don’t see how you can hope to maintain such cognitive dissonance in the presence of scrutiny. And I don’t see what you are losing by conceding.

Luke: “Ben, you seem very concerned to feel like you are moral, and to live in a way that gives you practical benefit.”

Again, this makes plenty of sense given your persistent inability to be square with what’s actually being said to you. When did I advocate not actually wanting to be truly moral? Not what I said, but it must be what you heard. Wanting to feel like a good person who lives a functional life even despite uncertainty should (a) not be demonized and (b) not be treated as synonymous with being uncorrectable.

Luke: “I am more concerned with truth. If there are moral facts, then I want to know which ones are true…”

As I just said above, there’s no reason a measure of pragmatism is at odds with finding out which moral facts are true. And I’ve said this from the beginning. Just because you can’t find that balance in yourself, doesn’t mean other people are equally inept.

Luke: “…not which ones conform to my inborn prejudices or which ones are most easily applicable.”

Since when is everyone so well versed in only prejudice and laziness? Sounds like you’ve become prejudiced and lazy in your attempt to avoid it since you’ve characterized everyone in this way and found a lazy way to misrepresent them. Um, epic FAIL?

I also see that in general you are channeling all of this criticism into an inflation of your misguided sense of idealism. “Sorry, cupcake, Dad is just too virtuous to let you use training wheels. Got to get it right the first time!” “Someone called child services on me!?! What!??! But don’t they understand I’m just soo zealous for bicycling facts!?!” Just because we are advocating a balanced and graduated picture doesn’t mean we are lop-siding things in just the opposite direction, towards touchy-feely morality and unconcern for getting things more and more correct in the long run. But you do make the job of rebutting you much easier when you persistently misrepresent the criticism. So by all means, continue.

Even having accepted DU, I can find common ground with just about everyone at every level even if they can’t find it with me. I would have to consider myself morally inept, conceptually challenged, and philosophically snobbish to be unable to do this. And I would never tell them that getting all of their philosophical i’s dotted and t’s crossed is imperative for them making improvement. It is clearly better for people in general to work from where they are rather than pretending to be somewhere they are not.

If you actually have the desire not to do more harm than good, then you have reason to consider that not being practical contributes to being a bad person.

Yair: These are empty words without context. Why do you think DU provides the right context? Is DU what you really desire? I don’t think it is. I don’t think there ever was a creature that wanted it, and likely there won’t be unless someone intentionally makes one. Do you really want to evaluate all desires equally, without regard to their content, except for how harmonious the desire is in a perfect world? How about how harmonious the desire is in this world? And why not be true to yourself, and value the things you desire even if they’re not perfectly harmonious in a perfect world?

Yair, wanting to know (for real) what is moral so that I can live in accordance with moral facts is like wanting to know what is true about God so I can live in accordance with those facts.

DU is not exactly what I always wanted morality to be. I don’t want thousands of moral facts to be unknown, pending painstaking research. I don’t WANT to realize that there is no intrinsic value difference between the desire to rape someone and the desire to feed someone. But I do want to know what is moral and act on it, and it just so happens that when I do my research I think it likely that DU characterizes morality properly, and other theories do not.

But in the same way, I didn’t WANT it to be the case that God does not exist. I didn’t WANT it to be the case that no part of me will exist after death. But I did want to know the truth about God and act on it, and as it turns out when I do my research I think it likely that naturalism characterizes reality properly and supernaturalism does not.

Ben: No, actually, our moral intuition, as I already said, comes from our experience as successful navigators of human desire. In other words, I formulated “moral intuition” in terms of desire utilitarianism so you’d have no excuse not to get the idea. But you found an excuse anyway by ignoring what I said: that everyone has at least *some* experience in managing their desire economy successfully, and that this manifests as intuition in future circumstances. So I gave you good reason to at least partially trust your moral intuitions based off of the very moral theory you claim to advocate.

Yes, I understood all that, and I replied that this does not give us reason to trust our pre-philosophical intuitions about morality. First, our pre-philosophical moral intuitions do not tell us which theory is correct. If it turned out that a God existed or categorical imperatives existed, then our moral intuitions would make many different errors than if desire utilitarianism is correct.

Second, if we assume desire utilitarianism to be true, this still does not give us reason to trust our pre-philosophical intuitions about morality – because our thoughts along this line are no longer pre-philosophical, but post-philosophical ‘educated guesses.’ At this point, our pre-philosophical intuitions will only get in our way. For example, though we have evolved to navigate desire economies, we have been programmed to be extremely concerned about the desires of our immediate tribe, and very little concerned with the desires of people from other tribes or nations, and not at all concerned with the desires of other species. So even here, at the most basic level, our pre-philosophical intuitions will lead us astray.

Ben: In ADDITION to that, I went into the consequences of such folly for the sake of a balanced picture.

And again I say that an argument from consequences does not lead to truth.

Ben: Well, you’ve a priori defined all moral knowledge that is not desire utilitarianism as non-moral knowledge in order to justify your extremism.

This was the only sentence of this paragraph I understood.

There is basically nothing a priori about desire utilitarianism. I have, a posteriori, concluded that moral theories which are built from pre-philosophical intuitions or categorical imperatives or intrinsic values or commands of a god happen to be false. These theories certainly concern morality, so they are not ‘non-moral’, but they do happen to be false.

Ben: As I just said above, there’s no reason a measure of pragmatism is at odds with finding out which moral facts are true. And I’ve said this from the beginning. Just because you can’t find that balance in yourself, doesn’t mean other people are equally inept.

And as I’ve continually said, an argument from consequences is no help in learning the truth about something. Do you really deny this?

Ben: Since when is everyone so well versed in only prejudice and laziness?

I never said any such thing. But I don’t have a problem calling our pre-philosophical moral intuitions “prejudices.”

Ben: Just because we are advocating a balanced and graduated picture, doesn’t mean we are lop-siding things in just the opposite direction towards touchy feelie morality and disconcern for getting things more and more correct in the long run. But you do make the job of rebutting you much easier when you persistently misrepresent the criticism. So by all means, continue.

Ben, I have spent a lot of time defending my views, and I will spend a lot more time in the future. You keep saying that your view is superior because it’s more balanced and less extreme. But I have yet to hear a successful argument for why it happens to be true that your more “balanced” view is true, and my more “extreme” view is not. If the truth about morality actually IS that, for example, pre-philosophical moral intuitions are a somewhat reliable guide to moral truth, then I would like to hear an argument for that, not continuing assertions that my view is “extreme” and therefore your view is superior.

You gave an argument about our experience in desire economies, but I think I rebutted that above. Please tell me why your view of morality is true, rather than just labeling your view to be more “balanced” or “mature.”

Ben: I would never tell them that getting all of their philosophical i’s dotted and t’s crossed is imperative for them making improvement. It is clearly better for people in general to work from where they are rather than pretending to be somewhere they are not.

What do you mean? Do you mean that people can become more moral without doing rigorous philosophical work to find out what is moral? Of course they can – by sheer accident, or by cultural trend, or by religious trend, whatever. I don’t think ancient Jainism had a clue about what was true or false about the universe, but it happened to stumble on an ethical system that is more ethical than that of, say, Islam. People may stumble forward or backward in moral progress if measured against true moral facts, whatever those facts may be.

Can you be more specific about what you mean? Understanding quantum mechanics is hugely difficult, and it’s very impractical to expect scientists to all figure it out perfectly. Does this mean I should advocate that people just take a guess at what’s true about some aspect of quantum mechanics and then act on it? Would I be a bad person to insist that we not claim knowledge about quantum mechanics before doing the proper research, even though quantum mechanics research is very hard?

Alright: It’s called Redesigning Morality, and it gives Dennett’s take on ethics (the book covers many loosely-bound topics, so you can understand the chapter pretty well without having to read the whole book). His approach is pragmatist and focuses on meta-rationality. It’s very relevant to your situation because he criticizes all consequentialist moral theories for being fundamentally impractical, and argues rather persuasively that practicality matters for semi-rational beings like us.

Luke: Yes, I understood all that, and I replied that this does not give us reason to trust our pre-philosophical intuitions about morality. First, our pre-philosophical moral intuitions do not tell us which theory is correct. If it turned out that a God existed or categorical imperatives existed, then our moral intuitions would make many different errors than if desire utilitarianism is correct.

The thing that makes those things incorrect is the fact that morality was always connected to primitive notions of desire utilitarianism all along. Morality could only be, and was only ever, about managing a flourishing economy of desires. If morality never had anything to do with what is in our heads, then by definition there was never a reason to care about whatever the hell “morality” was in the first place. It would be free association. Why would we go look for it? Why investigate the world at all so intensely over just any random word that floats our way? When you frame your moral research in a manner that cuts you off from your pre-philosophical moral background knowledge, you no longer have a reason to investigate morality. In other words, you are giving other meaningless bullshit options too much credit.

Yet you did pursue moral knowledge anyway, because your moral background knowledge had at least limited merit. The frame of my moral research paradigm accounts for that in a very straightforward way, but yours outright contradicts it. When you pull everything off the table (including your desire to eat), there’s no reason to set the table, and so the table never gets set. You didn’t get a memo from heaven to investigate this bizarre thing called morality. And categorical imperatives did not bang on your door with a research proposal. The desire came from within, and your understanding of the question (“how do you know you’ve done more good than bad”) came from your background knowledge on the topic. Magically you knew it was the right question to ask. How? Please show me how your explanation accounts for this better than mine, because I don’t think you can account for it consistently at all.

The reason you at least tentatively accepted DU was because it maps onto real-world facts. Since that is the case, there is no reason not to retrofit that conclusion to all other more primitive executions of desire economy maintenance. Real facts love other real facts, don’t they? Why wouldn’t DU give us proper insight into what other people have been trying to do all along? Every proposed spin on moral theorizing is met with criticism from nowhere but our foggy moral background knowledge and intuitions. “Oh yeah, but how does moral theory x account for y?” In other words, we are a group of rookie desire economists trying to figure out what the hell we are doing. Evolved creatures (evolutures) who never had any clue how to maintain their economy of desires in practice obviously didn’t stand as much of a chance of being functional individuals (by definition) as other evolutures who did (unless you ask someone like Plantinga, in which case a head full of beans is a great evolutionary idea). And whether that is philosophically executed with precision or not, or even if it has a tribalistic falloff, only such evolutures could ever be expected to look for and discover DU. In other words, if you are correct about DU in the way you think you are, DU is incorrect.

We didn’t get to quantum mechanics by fiat and someone has already dropped the Newton analogy on you to no avail. Apparently you didn’t even understand my “training wheels” analogy or didn’t think it mattered. Maybe you agree with Chuck in response to Reginald, but we aren’t Microsoft computers. In fact, I think you’ve pointed out that Jeff Hawkins has pointed out that we don’t do things like our computers do. We have dirty messy ways of making functional conclusions (heuristics, as I’m sure you are aware) so little errors don’t actually matter as Chuck claimed. He might be properly defending your views, but you are both factually wrong. Therefore your version of defending DU is wrong (if you agree with his defense, and it seems like you would).

Ben: “Since when is everyone so well versed in only prejudice and laziness?
Luke: I never said any such thing. But I don’t have a problem calling our pre-philosophical moral intuitions “prejudices.”

One, you did say such a thing, and then you just said it again. You’ve implicated everyone who doesn’t have a developed moral philosophy as being full of prejudice. Are there simply no naturally adept moralists who can’t tell you why they do anything they do, and yet still maintain a flourishing economy of desires? These are people you haven’t even met, and that is your prejudiced judgment of their moral abilities. Basically what you’ve said to me is, “Not x! X!”

There are a dozen other things I could take issue with that you said, but for whatever reason it doesn’t seem like you are really ready to have this conversation at this point in your life. I do have to bring it up, because I think you are advocating an immoral and prejudiced version of an otherwise enlightening moral theory. You’re an intelligent person who can easily do better with just a touch of self-awareness. I’m sure you’ll be more correctable on other issues.

lukeprog: Do you mean that people can become more moral without doing rigorous philosophical work to find out what is moral? Of course they can – by sheer accident, or by cultural trend, or by religious trend, whatever. I don’t think ancient Jainism had a clue about what was true or false about the universe, but it happened to stumble on an ethical system that is more ethical than that of, say, Islam. People may stumble forward or backward in moral progress if measured against true moral facts, whatever those facts may be.

I see that I missed this chunk of your last comment. It could represent a concession or a contradiction depending on how one looks at it. I’m going to have to assume it’s a contradiction. You did say:

lukeprog: But I don’t have a problem calling our pre-philosophical moral intuitions “prejudices.”

That remains a very broad brush stroke, and it’s not quite fair to say the examples you mentioned had no idea what they were doing in moral terms or why. I wouldn’t expect them to disregard all of those lessons learned in previous belief systems, as you claim to have done when leaving Christianity. That *is* experience with desire economy maintenance, even if it has to be reinterpreted in light of new and better ideas. It’s a literal impossibility to throw it all out, and there’s just no reason to even if you could. Far from “getting in the way,” it informs every step of your new journey, even if a lot of it can be considered “what not to do.”

The reason you at least tentatively accepted DU was because it maps out onto real world facts. Since that is the case, there is no reason to not retro-fit that conclusion to all other more primitive executions of desire economy maintenance.

This is false. A theory does not just explain the facts, it also provides illumination of the facts, which can then be reinterpreted under a better understanding of the world.

lukeprog: The exact same way every other theory validates itself; by proving to be more true about the universe than competing theories.

How would you test it? You can test the truth of the theory of gravity by seeing what happens when you drop something. It is a theory that models a system (not itself) that can be observed. An ethical theory does not have that testability (or am I missing something?). Suppose DU yielded the following calculation: It is moral to shoot that guy because he has a gun pointed at that child.
So how do I test the truth of that result? Is there any way that a person could look at a calculated objective moral judgment from a system and declare “yep, DU got it right” in the same way a person could say, after seeing an object fall, “yep, that gravity theory got it right”?

How do you test a moral theory? Exactly like you test any other theory.

There are basically two tests. First is the semantic test, to see if you and I are talking about the same thing. For example, if you say you’re proposing a theory of gravity but it has nothing to do with what people generally mean when they speak of gravity, then I’ll say you might have a nice theory but it has nothing to do with gravity. This is an empirical test, because we can measure how people use terms that have to do with gravity.

The second test is also an empirical test. Does the theory refer to things that exist, and do they behave as the theory describes?

Desire utilitarianism passes these two tests better than any other moral theory, I contend.

lukeprog: For example, if you say you’re proposing a theory of gravity but it has nothing to do with what people generally mean when they speak of gravity, then I’ll say you might have a nice theory but it has nothing to do with gravity. This is an empirical test, because we can measure how people use terms that have to do with gravity.

Not to butt in, but this is a semantic test more than an empirical test. You give definitions within a theory so that people know what you’re talking about–you don’t just adopt the definitions of the general public.

For example, it would be rather silly to dismiss string theory because it doesn’t deal with what people usually mean when they use the term “string”.

Using this as a test for DU seems to me rather like comparing the conclusions of DU with our intuitions about morality to see if DU is true, but you’ve already rejected our moral intuitions. Here’s the question: we test gravity theories by comparing their predictions with measurements of gravity, so what do we compare the predictions/calculations of DU with to test DU?

lukeprog: Ben, I think we’re talking completely past each other, so for now I’ll just stop.

Sounds good to me. I was thinking of perhaps argument mapping this sometime. We’ll see how that goes though. I hope there’s no hard feelings. I like what you are doing in general, just not how this particular issue comes off. And I don’t want to press things if I’m not actually helping (even if I happen to be the one that is mistaken).

Luke, can you give an example of DU passing the empirical test? It would have to look something like this: DU calculates that it is immoral to punt a chihuahua. And since it is in fact immoral to punt chihuahuas, DU passes the test.
But how did we get that second statement? If we already had the answer, why did we need a model in the first place? Or does an empirical test on a moral judgment look vastly different?

Lorkas: Not to butt in, but this is a semantic test more than an empirical test.

Yes, it’s a semantic test that can be conducted empirically, by studying how language is used in the real world.

Lorkas: we test gravity theories by comparing their predictions with measurements of gravity, so what do we compare the predictions/calculations of DU with to test DU?

DU makes the empirical claim that the only reasons for action that exist are desires. This is testable. If God’s commands or categorical imperatives or intrinsic values exist, they would be reasons for action (I’ll write more on this later), but as it so happens we have no evidence that these things exist. We DO have evidence that desires exist (though not necessarily as a ‘natural kind’).

DU makes the empirical claim that desires come in varying strengths. This is testable.

DU makes the empirical claim that certain desires tend to fulfill more and stronger desires than they thwart. Each of these specific claims is testable.

If we are to determine whether it is moral or immoral to punt chihuahuas, we must know what ‘right’ and ‘wrong’ mean. There are many competing theories on what these words refer to. Do they refer to intrinsic values? That is a popular notion, but as it turns out it is false. Many people think these words refer to the commands of God, but as it turns out these commands do not exist. DU offers a theory of what ‘right’ and ‘wrong’ mean that is superior to other theories because it makes TRUE statements about things that exist, while still accounting for the common usage of moral terms.

Ok, wait. We are not talking about the same thing. To clarify, do you now feel that there are in fact objective moral truths, much like the gravitational constant, that are somehow immutably stitched into the universe, and that DU seeks to model this phenomenon? Or is it simply a foregone conclusion that DU should be considered the definition of morality? It seems to me that you favor the latter, because it seems that you take for granted that it is moral to fulfill more desires etc… If you grant that, then sure, it’s testable. But before going on, which is it: DU is a model for the real objective morality, or DU defines objective morality?

Penneyworth: But before going on, which is it: DU is a model for the real objective morality, or DU defines objective morality?

DU is a testable theory that makes predictions about desires, how they exist, and how they interact with states of affairs. This theory is true or false whether or not we think it has anything at all to do with “morality.” However, I think that it ends up looking a lot like it should be called a theory about morality, and also one that is superior to other moral realisms out there because the other moral realisms depend on false premises about intrinsic values or categorical imperatives.

Even if we assume that DU is correct and points to real moral facts, this still doesn’t justify your worries about the difficulty of finding these moral facts. Let me explain. I have been inclined towards preference utilitarianism for quite a while, and I’m still convinced that it’s basically right (I’m quite comfortable assigning intrinsic value to preference satisfaction, but that’s beside the point). I’ve changed my mind about a particular detail since, and I credit Dennett (the chapter referred to above) and an ethics class that I took with this.
Dennett points out that consequentialism requires the projection of possible alternatives and the choice of the best one. The problem with this is not the obvious objection that this projection is bound to be inaccurate. Dennett’s objection (he calls it the Three Mile Island Effect) is that there is no point in time at which you can stop your projection (even if virtually accurate), look back, and evaluate the consequences of a certain action, because that action will always continue to have effects. For example, the Three Mile Island disaster intuitively seems like a very bad thing. But how do we know this? For all we know, the incident might cause us to wake up and care more about the danger of nuclear energy, which will have better effects in the long run. But we’re not yet off the hook – the spread of safe nuclear power (caused by our reaction to TMI) will have yet more consequences. Thus, even if our difficult calculations were accurate (they won’t ever be, of course), we would have to go on and model the future until the heat death of the universe. This is what I take to be Dennett’s objection, and I think it completely undermines what we may call “ambitious consequentialism,” which your interpretation of DU seems to be a form of.
The next point is that we obviously are very limited in our ability to project future states of the world. This wouldn’t be that much of a problem, if it weren’t for the fact that trying to simply do better and try our best at these calculations may very well be irrational. In the hundreds of ethical decisions that you make in everyday life, not only would it not pay to make these calculations, but it would significantly impair your ability to be ethical, in the sense of actually making the world a better place.
These two points made me realize that my utilitarianism may be a correct abstraction, but that we have no hope of possibly living up to it. My conclusion is that we need to develop (and personally adopt) popular moral systems that are built on the basis of abstract moral theories but are actually viable. My favorite attempt at a viable moral system (though it is probably less viable among the uneducated public) is R.M. Hare’s two-level utilitarianism which assigns different roles for intuitive moral thinking and for critical moral thinking.
In any case, I understand when you may respond that you actually care really badly about the truth about morality. However, once you consider the difficulties for ambitious consequentialism as well as your limitations as a moral agent, I think you ought to make a decision about what you care more about: about real moral facts and your adherence to them (in other words, about your moral purity), or about actually improving the world. This seems to me to be the same choice that deontologists bungle – as Richard Chappell put it, they care more about not getting their hands dirty than about improving the state of the world.
All this is perfectly true even if DU is the true abstract moral theory.

Luke,
Help me out here. You call your website (which I very much enjoy) “common sense atheism”. Sam Harris and Shelly Kagan (in his debate with William Lane Craig) describe morality as being about helping/not hurting/maximizing happiness and well-being of conscious animals versus harming/not helping/increasing suffering and diminishing well-being of conscious animals. This sketch certainly appeals to common sense. Why do you seem to be making it more complicated than that?

“I think I may have found the right theory: desire utilitarianism. Unfortunately, this theory does not let me answer moral questions by closing my eyes and asking my ‘conscience.’ Nor does it have any easy answers to any moral questions.
Instead, desire utilitarianism says that moral imperatives can only be known by way of calculations involving billions of (mostly) unknown variables: desires, strengths of desires, relations between desires and states of affairs, and relations between desires and other desires.”
What an absolutely horrible thing to say!
Of course you know many moral truths. Of course you know that some redneck bullies dragging “some silly faggot” to death behind their cars is an atrocity. Of course you know that blowing up children in a pizza parlor because you don’t like their parents’ government’s foreign policy is wrong.
I think you know *lots* of moral truths.
Look, I had a painful deconversion from fundamentalist christianity. I know what it’s like to have to say goodbye to a cult. But what I hear from you is just you trading one cult for another. You never say “*I* think moral imperatives can only be known by way of calculations”, you say “DU says this or “according to DU that” or “DU defines morality as this”.
That is the cultist’s way of talking. I think I know it when I hear it. Why don’t you tell us what *you* think instead of what some self-published net-guru thinks?

Antiplastic, how do you know blowing up children is wrong? (I agree it *feels* wrong, but feelings can’t tell us what *is* wrong. After all, slavery felt right to a whole lot of people for thousands of years.)

Thanks Luke, that does clear it up for me. We may not really be talking about morality with DU. We are simply talking about an interesting theory that we may choose to attempt to apply to our behavior, because it may feel correct to do so. Of course, I agree with alex that the impracticality of application would likely undermine its potential virtue.
I discovered something awesome I’d like to share! I really enjoyed the Kagan debate, and have been wanting to listen to more of his lectures, and I found a jackpot: on the youtube channel called yalecourses, you can watch entire semesters of lectures on various subjects, and one of them is Kagan’s course about death. What a feast!

Chuck: Antiplastic, how do you know blowing up children is wrong? (I agree it *feels* wrong, but feelings can’t tell us what *is* wrong. After all, slavery felt right to a whole lot of people for thousands of years.)

What an odd question. The same way you (hopefully!) do. I imagine myself performing the act, and the thought of my having done it would make me feel guilty unto death. I examine various stories in which someone heroically prevents it from happening, and I find I would like to imagine myself to be that hero, or at the very least, praise and encourage others to be that hero. Then I commit myself to the stance and announce my commitment to others.

This is all any of us can ever do — examine the reasons and come to conclusions that we are more or less willing to maintain in the face of opposition. I can make sense of saying “this is what I think, but I could be wrong” as just expressing one’s openness to some possible future narrative in which the perpetrator is not a villain. But I’m afraid I can’t make out any distinction between holding my strongest and best-supported beliefs to be “true” and holding them to be “really” true. The word “really” just seems to function grammatically as an exclamation mark rather than any substantive conceptual role.

Chuck: Antiplastic, how do you know blowing up children is wrong? (I agree it *feels* wrong, but feelings can’t tell us what *is* wrong. After all, slavery felt right to a whole lot of people for thousands of years.)

Blowing up children is wrong because of the harm it causes to them and their families.

Why isn’t that a satisfactory answer?

1) What is “harm”?
2) Why is “harm” wrong?
3) Is “harm” always wrong? Or are there exceptions? How do you know which exceptions are valid?

Kip: 1) What is “harm”? 2) Why is “harm” wrong? 3) Is “harm” always wrong? Or are there exceptions? How do you know which exceptions are valid?

There are a variety of ways that people can be harmed: physical damage, mental damage, overriding one’s autonomy . . . I am sure there are philosophers who work on this list.

Why is harm wrong? Help me to understand why you are asking this question. What kind of answer to this question would satisfy you?

Clearly there are exceptions and harm is not always wrong. I am sure there are philosophers working on these exceptions as well. One may not always be able to prospectively know which exceptions are valid, but so what? We can learn morally just as we learn scientifically, no? In fact, it seems to me that is exactly what we do.

The principles of help/harm are the very principles that ought to be weighed when considering morality. I don’t really know what else matters.

lukeprog: I try not to trust the intuitions of myself or Alonzo Fyfe or anyone else, but instead work things out empirically and logically.

Well, no one I know is *against* trying to be self-consistent, or trying to believe contrary to evidence. But saying that you can’t trust your own conscience to tell you whether Rosa Parks refusing to surrender her seat on a bus was an act of moral courage is absolutely shocking. Do you honestly believe that? Do you honestly believe that everyone prior to five or ten years ago acted randomly with respect to what is moral, since they did not have the benefit of The One True Moralgorithm (Patent Pending)?
I know what it’s like to have a crisis of conscience on an issue. But if someone is seriously claiming they want to ignore conscience itself, this is a symptom of being “in the grip of a theory” at best and needing immediate medical treatment at worst. There’s no denying that deconverting from a fundamentalist mindset can be like having your moral compass remagnetized, and it can be extremely disorienting. As I said, I had my own existential crisis, and for a time I just jumped from one cult (fundamentalist christianity) that wanted to tell me all the answers to another cult (Marxism) that said it could tell me all the answers. What I learned from the experience (better late than never) is I was never going to find out how to be morally fulfilled until I outgrew the need to join some system that tells you all the answers and absolves you of having to make your own decisions. That’s the promise “objective morality” makes but can never deliver on.
But even during my radical swings of moral orientation, I never once doubted whether I would come out on the other side not loving my family and friends, or being pro-slavery, or being a puppy-torturer. Those sorts of things just aren’t at risk when we philosophise.

I do not believe I have a moral “conscience.” What I have are evolved moral prejudices and narrow social conditioning. I’m not sure that people’s actions occur with ‘random’ variance with what IS moral – in fact I doubt it.

Rosa Parks’ refusal may have been an act of great moral courage whether or not she could have given a justification from a particular theory of moral realism that vindicated her action.

You said, “Those sorts of things just aren’t at risk when we philosophise.”

Why?

If philosophy questions all truths, why is there something that is not at risk? Because you feel it to be so?

lukeprog: I do not believe I have a moral “conscience.” What I have are evolved moral prejudices and narrow social conditioning.

TomAYto, toMAHto. What exactly is it you think philosophy in general and this cult in particular are supposed to give you, some “view from nowhere”, sub specie aeternitatis? That the sins of temporality will be washed clean if metaphysics can give you some Archimedean point outside of time? Everyone in the cosmos lives somewhere, and everyone has a past. You hurl pejoratives at your own situatedness in the cosmos (it’s full of “prejudices”, and “narrow”) and this is still the religionist’s and the metaphysician’s attempt to escape reality and history, which are two ways of saying the same thing anyway.

lukeprog: I’m not sure that people’s actions occur with ‘random’ variance with what IS moral – in fact I doubt it.

Great! Then this means that your concerns in your post were unwarranted — you do think that there is some correlation between what most people hold to be moral and what “really” is moral. And so your own skeptical doubts should be laid to rest. You really can be confident that most of your moral beliefs are generally right.

lukeprog: Rosa Parks’ refusal may have been an act of great moral courage whether or not she could have given a justification from a particular theory of moral realism that vindicated her action.

We’re not talking about what philosophical justifications individuals are supposed to provide. Your initial concerns were that you had no idea whether the vast majority of your moral beliefs are true. By extension, you had no idea whether the vast majority of actions taken by people in the name of justice and liberty and all the rest were actually moral. But when you think about it, were you really doubting this? As I’ve said, I don’t think you were.

lukeprog: You said, “Those sorts of things just aren’t at risk when we philosophise.” Why? If philosophy questions all truths, why is there something that is not at risk? Because you feel it to be so?

Because it’s not psychologically practical. Every year in this country, thousands of college freshmen in intro-phil courses are introduced to the technique of Cartesian doubt. To my knowledge, not a single one of them has emerged disbelieving they had 10 fingers and 10 toes even though it had always seemed they did, or disbelieving that one needs air to live. I think your beliefs on puppy-kicking don’t need to be preparing themselves for a philosophical fight to the death anytime soon.

Antiplastic: What exactly is it you think philosophy in general and this cult in particular are supposed to give you, some “view from nowhere”, sub specie aeternitatis?

Antiplastic, you have lambasted me for saying that we should not take common moral opinions for granted – something I think would be uncontroversial after a quick glance at our species’ history of racism, sexism, tribalism, nationalism, etc. You use rhetoric to mock my view, but you have not, I think, given me a sound logical or evidential reason for us to trust our pre-philosophical moral intuitions.

I seek the truth – about the universe, about politics, about morality, about everything. Logic (and philosophy in general) can help me get there. So can science. If you think moral truth is better ascertained by pre-philosophical moral intuitions than by logic and evidence, then please just say so, and give me some reason for thinking this is true.

Antiplastic: Then this means that your concerns in your post were unwarranted — you do think that there is some correlation between what most people hold to be moral and what “really” is moral. And so your own skeptical doubts should be laid to rest. You really can be confident that most of your moral beliefs are generally right.

Even if my moral intuitions tend to often be correct, I would first need to discover moral truth in order to compare my moral intuitions to moral truth so I could know this. Also, I still wouldn’t have a basis for deciding when my moral intuitions are wrong and when they are right.

Antiplastic: you had no idea whether the vast majority of actions taken by people in the name of justice and liberty and all the rest were actually moral. But when you think about it, were you really doubting this?

Yes, and this is, I think, the source of my disagreement. I have been talking about what is true, and you have been talking about what is practical. Those are both important concerns, but they are different concerns.

lukeprog: Antiplastic, you have lambasted me for saying that we should not take common moral opinions for granted – something I think would be uncontroversial after a quick glance at our species’ history of racism, sexism, tribalism, nationalism, etc.

On the contrary — who said anything about “taking them for granted” or any kind of dogmatic assertion that they are all beyond question? There has got to be some middle ground between mindless dogmatism and skeptical despair. What people have been troubled by is your claim not to know whether almost any of them are true. Even in this sentence above, you mention racism, sexism etc. as noncontroversial examples you expect your readers to identify as immoral. So I think you know more than your theory is telling you you’re allowed to say, and hence you feel conflicted.

lukeprog: You use rhetoric to mock my view, but you have not, I think, given me a sound logical or evidential reason for us to trust our pre-philosophical moral intuitions.
Well, one argument is what I just did, namely point out what is called a “performative contradiction”. It turns out you really do believe most of your moral beliefs are true, and that Rosa Parks (and a million other examples) didn’t need a Moralgorithm to operate under the justified belief that what they were doing was right, and therefore you don’t need one to operate under the justified belief that most of them are right. If your “theory” tells you the appropriate response to the civil rights struggle is a sober moral agnosticism, then you need a new theory, plain and simple.

But the deeper problem is all your talk about moral “evidences” and “theories” and “truths” that are supposed to exist “out there” in some cosmological sense. Why on earth would you assume that the best way to go about deciding what kind of person you want to become bears any useful analogy to lepidoptery or particle physics, in which what you need to do is just collect facts and facts until you build up an unassailable description of some world independent of human needs and human concerns? Why start from this assumption, instead of approaching morality the way you appreciate a Beethoven piano sonata, or a Dostoevski novel, or approaching virtue like a garden which needs to be nourished, instead of a system which needs “facts” and “evidences”? If you’re worried about unexamined presuppositions polluting your enquiry, you should be worried that you just chose one analogy (“morality is like a descriptive science”) at the outset and stuck with it to the exclusion of other possibilities.
I spent a lot of time in this philosophical cul-de-sac, and I’m trying to give you the advice I wish someone had given me ten years ago. The trick is to stop thinking of “meta”-physics as some kind of “super”-science that’s going to tell you anything about “the really real” or the “truly true”. Let physics be physics and let poetry be poetry. I have this vision of “objectivist metaphysicians” watching To Kill a Mockingbird, graphing calculator in hand, saying “just a minute, and I’ll be able to tell you whether I thought the movie was any good, or whether its lessons on racism ring true.” To take a different moral stance on an issue is just to express a different noncognitive set of attitudes towards it, and to take a different metaethical stance like “objectivity” or “it’s all subjective” is just to express one’s noncognitive attitude towards certain global skeptical challenges to a moral outlook. “Facts” and “theories” and “evidences” don’t really have a place in this choice. It’s a decision you have to make, not one that something outside of you in “objective reality” has already made for you.

lukeprog: I seek the truth – about the universe, about politics, about morality, about everything. Logic (and philosophy in general) can help me get there. So can science. If you think moral truth is better ascertained by pre-philosophical moral intuitions than by logic and evidence, then please just say so, and give me some reason for thinking this is true.
Sorry, false dichotomy. One’s options are not limited to “gut intuition” and pure empirical description. Philosophy can be of great help in making moral decisions and developing moral awareness, insofar as philosophy resembles poetry and literature. And a good reason for thinking some kind of nondescriptivism in metaethics is true is that for every empirical fact, there is a question of what to do about that fact. You can probably name several empirical facts which you or I could not, even through Herculean efforts of will, bring ourselves to feel otherwise about. But can you come up with an empirical fact which in principle is immune to emotive disagreement? No, no one can. It’s not a question of squabbling over whether this or that kind of natural fact is “really” the basis of moral facts. It’s about recognizing that science isn’t a particularly helpful metaphor for what people are trying to do when they are trying to moralize.

lukeprog: Even if my moral intuitions tend to often be correct, I would first need to discover moral truth in order to compare my moral intuitions to moral truth so I could know this.
You’re painting yourself into this objectivist/descriptivist corner in which moral truth just has to be something “out there” beyond the human, beyond experience, and beyond history and contingency. There is no moral truth “out there” waiting to be discovered against which your intuitions can be graded. You can only consult your conscience, defend your vision of the good life in conversation with other people and revise your intuitions when better reasons and better visions appear to you, and seek to learn from other forms of living to see if there’s something you might be missing. You can weave all this into some grand narrative about the moral progress you’ve made in your life, but there simply is no such thing as stepping outside of yourself to find some pre-determined moral truth which will absolve you of your responsibility to think for yourself. Anyone who tells you different is selling something.

lukeprog: Yes, I was. And am.

Then you *are* saying that you think it’s a live possibility that the entire history of the human race has been random with respect to what is moral! But this implicitly contradicts much of what you say elsewhere.

lukeprog: Yes, and this is, I think, the source of my disagreement. I have been talking about what is true, and you have been talking about what is practical. Those are both important concerns, but they are different concerns.
But my point is that they are not different. Morality is, after all, a question of practice. You beg the question against irrealism by assuming moral truth can be meaningfully separated from practical truth in a way that makes it relevantly similar to empirical truth. To accept that something is morally courageous is nothing more and nothing less than to hope you would behave that way if you were in that situation, and to encourage others to behave likewise. You announce your plans. You express your admiration. These are all questions of action. No amount of “gathering facts” is going to get you to a point where the decision about what to commit to has been made for you. That’s the old “theological hangover”.

lukeprog: Exactly the questions I was about to ask, almost word for word!

So you ask, “Why is harm wrong?”
Still hoping someone can explain to me why this question is being asked. Do those asking the question want to be harmed? Does anybody want to be harmed? Is it not one of the most important characteristics of humanity that we are rational enough to recognize that harm is wrong (i.e., to consider the reasons for not causing harm)?
It seems to me that cases where harm appears to not be wrong are ones where the harm in question ultimately leads to less harm. It seems obvious that in individual circumstances or cases, things can get complicated (and it seems to me that there may in fact be multiple equally valid moral choices in a given situation rather than just one), but when morality is distilled to its most basic considerations, isn’t it entirely about harm/help? Aren’t people behaving immorally when they do not consider harm/help, and instead, either reject balanced and open-minded considerations of harm/help or follow dogmas that prevent doing so?
So again, while individual cases can become complicated in terms of identifying the actions that lead to the most help and the least harm, can someone please explain to me how morality is really, at its core, any more complicated than this seemingly primary consideration?

Antiplastic: So I think you know more than your theory is telling you you’re allowed to say, and hence you feel conflicted.

I realize my examples are misleading. With those examples I’m appealing to the sentiments of your readers. In effect, I was saying, “Look, even you can’t think our moral intuitions are reliable, because this is where they’ve led us in the past, and surely you don’t think those practices are moral…” But I don’t take it for granted that sexism, racism, etc. are wrong. That’s an empirical question, in my view.

Antiplastic: It turns out you really do believe most of your moral beliefs are true, and that Rosa Parks (and a million other examples) didn’t need a Moralgorithm to operate under the justified belief that what they were doing was right.

No, I never said Rosa Parks was justified in believing her action was morally right.

Antiplastic: If your “theory” tells you the appropriate response to the civil rights struggle is a sober moral agnosticism, then you need a new theory, plain and simple.

Again, this is a bald assertion, not an argument. A few centuries ago you probably would have said, “If your ‘theory’ tells you to be morally agnostic about the rights of men to lead the conduct of their wives, then you need a new theory, plain and simple.”

Antiplastic: Why on earth would you assume that the best way to go about deciding what kind of person you want to become bears any useful analogy to lepidoptery or particle physics, in which what you need to do is just collect facts and facts until you build up an unassailable description of some world independent of human needs and human concerns?

I started with no such assumption. In fact, I started by assuming there were no objective moral facts, and that if I was to believe in them I would have to be shown evidence of them. For a long time I assumed this was impossible. And then somebody, Alonzo Fyfe, finally bothered to show some evidence for a theory of moral realism.

Antiplastic: To take a different moral stance on an issue is just to express a different noncognitive set of attitudes towards it, and to take a different metaethical stance like “objectivity” or “it’s all subjective” is just to express one’s noncognitive attitude towards certain global skeptical challenges to a moral outlook.

Now you are starting to assert a position that I can rebut. Rebutting noncognitivism is the first task of my upcoming ‘Defense of Desire Utilitarianism.’

Antiplastic: There is no moral truth “out there” waiting to be discovered against which your intuitions can be graded.

I understand your desire to assert that, but I have given evidence that this assertion is, under certain understandings, false. And I will give a more thorough defense of the theory later.

Antiplastic: Then you *are* saying that you think it’s a live possibility that the entire history of the human race has been random with respect to what is moral! But this implicitly contradicts much of what you say elsewhere.

Not true, for I have been saying all over the place that “I’m not sure. I don’t know.” That was the whole message of this post, ‘Living Without a Moral Code.’

Silver Bullet: So you ask, “Why is harm wrong?” Still hoping someone can explain to me why this question is being asked. Do those asking the question want to be harmed? Does anybody want to be harmed? Is it not one of the most important characteristics of humanity that we are rational enough to recognize that harm is wrong (i.e., to consider the reasons for not causing harm)?

Yes, why is harm wrong? Because people don’t like to be harmed? But then, why is morality determined by what people like and dislike? And, is it then right to harm the masochist? In what sense is harm wrong? Are plants and animals harmed in the same way as humans? If not, what’s the relevant difference, and how is this choice of what makes the difference not arbitrary?

Or perhaps harm is wrong because acts of harm have negative intrinsic value? But what does that mean, and how would you tell if an act has intrinsic value, negative or positive? What experiment would distinguish these as a natural fact about the world, rather than a question of human opinion?

Hi Luke
I have added a follow-up post, Desire Types and Tokens, addressing what I thought was the most problematic paragraph in your post. However, I can see, especially from how the comments are going, that this was not the main issue, nor was it intended to be; the main issue is rather the (correct) attack on moral intuitionism as the basis for a moral framework or system, realist or otherwise.

lukeprog: I realize my examples are misleading. With those examples I’m appealing to the sentiments of your readers. In effect, I was saying, “Look, even you can’t think our moral intuitions are reliable, because this is where they’ve led us in the past, and surely you don’t think those practices are moral…” But I don’t take it for granted that sexism, racism, etc. are wrong.

At this point we’re just quibbling over labels (“take it for granted” vs. “operate under our most reasonable assumption until someone gives a positive reason for doubt”). And I am singularly unimpressed by the argument from pessimistic induction, for the same reason most people are unimpressed by the identical argument as applied to scientific knowledge. Virtually the entire history of medical science prior to the 19th century was the history of the placebo effect. But only crackpot science deniers make arguments like “medicine has been wrong in the past, so you can’t trust doctors.”

That’s an empirical question, in my view.

I have absolutely no idea how it could be an empirical question because I have no idea what empirical evidence for or against it would look like.

No, I never said Rosa Parks was justified in believing her action was morally right. Again, this is a bald assertion, not an argument.

And it’s a “bald assertion” that I have ten fingers and ten toes “because my senses tell me so in a massively consilient fashion”, or that I know what my phone number is, or how to tie my shoes. Just because you can concoct some far-out, hyperbolic philosophical doubt that “because my senses tell me so in a massively consilient fashion” is a good reason to believe things doesn’t mean it’s useful or practical to do so. It’s as unrealistic, psychologically speaking, as trying to *genuinely* doubt that your moral revulsion at watching the protestors in Birmingham or Tehran being fired upon is appropriate.

A few centuries ago you probably would have said, “If your ‘theory’ tells you to be morally agnostic about the rights of men to lead the conduct of their wives, then you need a new theory, plain and simple.”

Again with the pessimistic induction and the impossible demand that someone stand outside of history and contingency and pass judgment on the whole! It can’t be done. You’ll feel better once you abandon this fundamentally theistic urge.

I started with no such assumption. In fact, I started by assuming there were no objective moral facts, and that if I was to believe in them I would have to be shown evidence of them. For a long time I assumed this was impossible.

Well, all your challenges to me so far seem to rely on it in a rather straightforward fashion. I believe you when you say you “started by assuming there were no objective moral facts, and that if I was to believe in them I would have to be shown evidence of them”. Read your last four words. You still assume that moral truths (if they exist) are things “out there” like trees or clouds that it even makes sense to ask for empirical evidence about. That’s the psychological habit I’m trying to draw your attention to.

And then somebody, Alonzo Fyfe, finally bothered to show some evidence for a theory of moral realism.

Now you are starting to assert a position that I can rebut. Rebutting noncognitivism is the first task of my upcoming ‘Defense of Desire Utilitarianism.’ I understand your desire to assert that, but I have given evidence that this assertion is, under certain understandings, false. And I will give a more thorough defense of the theory later.

I’ll take that as an IOU. Of course it’s unreasonable to demand that someone lay out their entire philosophy in a blog comment. But have you given thought to noncognitivism as a position *about* metaethics instead of simply one within it?

Not true, for I have been saying all over the place that “I’m not sure. I don’t know.” That was the whole message of this post, ‘Living Without a Moral Code.’

But you don’t seem to do this consistently, e.g. in your post on republican moral values, your evaluative stance came through rather loud and clear. Do you really not help old ladies across the street, not vote, not have an opinion on the liberation of Iraq etc.? It just seems absolutely bizarre that you are so passionately dedicated to a lifestance which appears to be, by your own insistence here, almost completely devoid of moral content.

lukeprog: Yes, why is harm wrong? Because people don’t like to be harmed? But then, why is morality determined by what people like and dislike? And, is it then right to harm the masochist? In what sense is harm wrong? Are plants and animals harmed in the same way as humans? If not, what’s the relevant difference, and how is this choice of what makes the difference not arbitrary? Or perhaps harm is wrong because acts of harm have negative intrinsic value? But what does that mean, and how would you tell if an act has intrinsic value, negative or positive? What experiment would distinguish these as a natural fact about the world, rather than a question of human opinion?

To consider morality as simply being about what people like and dislike seems overly simplistic, and does not capture the spirit of what I have been suggesting. We can easily consider how what we might like is actually harmful to others or even ourselves. So while it seems to me that, yes, morality is partly about what people like and dislike, it is, in fact, deeper than just that. As described in an earlier post, it entails a consideration of how actions affect the happiness and well-being of other conscious creatures (who can experience happiness and suffering, to the best of our knowledge). Morality exists because we are capable of considering these matters.
Why does harm have to be wrong in a specific “sense”? Harm is damaging, causes suffering, and diminishes the happiness and well-being of others. As Shelly Kagan might say, “Full stop”.

Let me ask you this: what could morality possibly be about if not a consideration of harm versus help, and happiness/well-being versus suffering?

The problem is that it seems like someone can acknowledge the descriptive facts that an action is harmful to others, goes against their wishes, and causes them to suffer, all without ever accepting any moral claim whatsoever. No matter how many descriptive facts one recognizes, that doesn’t add up to a moral evaluation.

Let me ask you this: what could morality possibly be about if not a consideration of harm versus help, and happiness/well-being versus suffering?

This assumes that morality is about something. But perhaps morality is a serious illusion, or perhaps moral evaluations aren’t factual descriptions and thus don’t have any descriptive criteria built into them.

We could go in circles for ages, but I’ll respond to this question. No, they didn’t. Of course, I still have a lot more reading to do, but I know the basic cases made by Brink, Railton, Boyd, and Shafer-Landau, and I think they all fail badly. Most of them argue for moral realism the same way apologists argue for theological realism.

Where is Sayre-McCord’s argument for moral realism? Which publication is that?

antiplastic: Do you really not help old ladies across the street, not vote, not have an opinion on the liberation of Iraq etc.? It just seems absolutely bizarre that you are so passionately dedicated to a lifestance which appears to be, by your own insistence here, almost completely devoid of moral content.

No, I have some guesses on all these things. But I will admit to a great deal of inconsistency on this issue. My understanding of morality has changed so much over the past 12 months that I’m sure my posts are inconsistent.

Silver Bullet: Let me ask you this: what could morality possibly be about if not a consideration of harm versus help, and happiness/well-being versus suffering?

There are dozens of moral systems not based on the assumptions of consequentialism, the intrinsic negative value of harm, and the intrinsic positive value of happiness. For example: divine command theories, Kantian ethics, several forms of contractarianism, several forms of natural and non-natural consequentialism, virtue theories, etc. If you’re going to assert against all these other views that NO, what’s really moral is anything that increases happiness and limits harm, then you’re going to have to actually defend that view with some evidence, rather than just assuming it. Why is well-being the ultimate good, rather than obeying Yahweh or maximizing extropian values or collecting marbles or acting from a pure will or pursuing intellectual virtue?

Silver Bullet: …when morality is distilled to its most basic considerations, isn’t it entirely about harm/help?

I’m sure Luke will get to this in his D.U. posts. In the meantime, I have come across this passage from Hume that is very insightful:

Take any action allow’d to be vicious: Wilful murder, for instance. Examine it in all lights, and see if you can find that matter of fact, or real existence, which you call vice. In which-ever way you take it, you find only certain passions, motives, volitions, and thoughts. There is no other matter of fact in the case. The vice entirely escapes you, as long as you consider the object. You never can find it, till you turn your reflection into your own breast, and find a sentiment of disapprobation, which arises in you, towards this action. Here is a matter of fact; but ’tis the object of feeling, not of reason. It lies in yourself, not in the object. So that when you pronounce any action or character to be vicious, you mean nothing, but that from the constitution of your nature you have a feeling or sentiment of blame from the contemplation of it. … Nothing can be more real, or concern us more, than our own sentiments of pleasure and uneasiness; and if these be favourable to virtue and unfavourable to vice, no more can be requisite to the regulation of our conduct and behaviour.

— from A Treatise of Human Nature (1739–40), Book III, Part 1, Section 1, pp. 468–469, by David Hume

lukeprog: We could go in circles for ages, but I’ll respond to this question. No, they didn’t. Of course, I still have a lot more reading to do, but I know the basic cases made by Brink, Railton, Boyd, and Shafer-Landau, and I think they all fail badly. Most of them argue for moral realism the same way apologists argue for theological realism.

Well, in my view *all* “realisms” about anything are a holdover from a theological mindset, so we may agree more than you think. And I am especially unconvinced by moral realism. But this blanket ad hominem comparing people like Peter Railton to Kent Hovind is waaaay over the top and uncalled for. I can’t imagine which of their arguments you think merit that description.

Where is Sayre-McCord’s argument for moral realism? Which publication is that?

Lots of them. You could try his anthology _Essays in Moral Realism_ for a start. Aside from being a great collection in general, it also has his entry on Explanatory Impotence, which I believe is actually available online.

No, I have some guesses on all these things. But I will admit to a great deal of inconsistency on this issue. My understanding of morality has changed so much over the past 12 months that I’m sure my posts are inconsistent.

To be completely consistent, you’d have to refrain from almost any action until you’d fired up your desireometer and done an explicit calculation. But since this is not only impossible in practice but impossible in principle, you’ll never be able to be fully consistent.

Antiplastic: But this blanket ad hominem comparing people like Peter Railton to Kent Hovind is waaaay over the top and uncalled for. I can’t imagine which of their arguments you think merit that description.

Okay, now you’re just blatantly putting words in my mouth and attacking straw men. I never said any such thing.

lukeprog: Okay, now you’re just blatantly putting words in my mouth and attacking straw men. I never said any such thing.

So when you said “Most of them argue for moral realism the same way apologists argue for theological realism,” you meant to compliment them by comparing them to the intellectually honest apologists with the compelling arguments you find especially persuasive?

If I unpack my statement, it meant:

1. Apologists argue for theological realism with poor arguments (assuming they have the default position, feeling free to posit exotic entities to explain anything not currently understood, etc.)
2. Moral realists use the same kinds of poor arguments to defend moral realism.

As for comparing moral realists to Kent Hovind, if there are specific comparisons that are accurate, then I could defend them. Most moral realist philosophers are male, for example, and it is true that they share that with Kent Hovind. But I can’t compare moral realist arguments to Hovind’s arguments for theological realism, because I don’t know what Hovind’s arguments are, or whether I could even call them ‘arguments.’

lukeprog: If I unpack my statement, it meant: 1. Apologists argue for theological realism with poor arguments (assuming they have the default position, feeling free to posit exotic entities to explain anything not currently understood, etc.) 2. Moral realists use the same kinds of poor arguments to defend moral realism.

Since you are familiar enough with Boyd and Brink, could you maybe give a specific example of one of them “positing exotic entities to explain anything not currently understood”?

I’m certainly not familiar with their work in great depth, but as I recall Boyd and Brink do not posit exotic entities. Instead, they rely mostly on arguments that assume the burden of proof is on the skeptic of moral realism, and then try to diminish the usual attacks on moral realism.

Shafer-Landau, on the other hand, certainly posits some exotic and poorly-defined entities.

Lukeprog,
I understand the dilemma you face, and it’s quite reasonable that you’d find it hard to decide the best course of action when you clearly don’t have enough information to go on. I have read your blog for a while and recently started listening to your podcast CPBD (since I got my Droid, I’ve been listening to it all the time), and have since become quite a fan of desirism. And while things like rape and theft are, more or less, simple moral calculations (as it would take some strange and unusual circumstances to make those actions moral), other things, like animal research, are difficult questions for me to answer. I recognize that I have a lot of prejudice in my analysis simply from being raised in a typical omnivorous household in a society that placed little value on the desires of animals, much less mice. It’s also difficult because I work in cancer research, and this directly impacts the research I do on a daily basis. But on the other hand, not doing cancer research could also be an immoral action.