Saturday, January 22, 2011

One more thing, though, for now

I must raise this one before (a) becoming more intermittent for a week or so, and (b) moving to some other subjects for a while. It's relevant to the "counting grains of sand" issue that we were discussing the other day.

On pages 71-73 of The Moral Landscape, Sam briefly discusses the work of Derek Parfit. Quite properly, he notes on page 71 (and explains over the page in more detail) that Parfit has shown how consequentialist theories of morality lead to "troubling paradoxes" whether we are concerned to maximise total units of welfare (or whatever) or average units. He (Sam) refers to this as one of the "practical difficulties for consequentialism", but it's not just a practical difficulty. It's a challenge to the very coherence of consequentialism, or at least to its intuitiveness when probed - at any rate, to any kind of consequentialism that claims we are objectively required to maximise something (I take a consequentialist approach myself, in a broader sense).
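To see why both maximising rules run into trouble, here is a minimal sketch with hypothetical welfare numbers (the populations and values are mine, purely for illustration, not Parfit's own figures): totalism prefers a vast population of lives barely worth living over a small, very happy one (the Repugnant Conclusion), while averagism says that adding a person whose life is good, but below the current average, makes things worse.

```python
# Illustrative sketch with made-up numbers: why "maximise total welfare"
# and "maximise average welfare" both yield counterintuitive verdicts.

def total_welfare(welfares):
    return sum(welfares)

def average_welfare(welfares):
    return sum(welfares) / len(welfares)

# A small, very happy population vs. a vast population of lives
# "barely worth living".
happy_few = [100] * 10     # total 1000, average 100
drab_many = [1] * 2000     # total 2000, average 1

# Totalism ranks the drab many above the happy few (Repugnant Conclusion):
assert total_welfare(drab_many) > total_welfare(happy_few)
# Averagism reverses that ranking:
assert average_welfare(happy_few) > average_welfare(drab_many)

# But averagism has its own paradox: merely adding one more person whose
# life is clearly worth living (welfare 50) "worsens" the outcome, because
# it drags the average down.
base = [100] * 10
plus_one_good_life = base + [50]
assert average_welfare(plus_one_good_life) < average_welfare(base)
```

The point of the sketch is that the trouble appears even with perfect information about every welfare value, so it cannot be a merely practical problem of measurement.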

Unless Parfit's paradoxes can be solved, consequentialism, in the relevant sense, is in deep conceptual and theoretical trouble, not just practical trouble. Sam does not claim to be able to solve them, and he offers no good grounds for confidence that they can be solved.

Perhaps they can be. (My own "solution" is to deny that there are objectively binding moral principles, or that we are trying to maximise anything; to base our decisions on a plurality of values; and generally to take a pluralistic approach to how we ought to act. I have an article about this in a fairly recent issue of The Journal of Medical Ethics, where I apply it to the life extension debate.)

But in the end, he concedes that summation of "welfare" cannot be our only metric (while saying that at the extremes there must be some kind of metric). This is just where it starts to get good, and it is as close as he comes to addressing the metric problem.

But he then fobs off the problem by remarking that certain moral questions are difficult to solve in practice, and that nothing untoward follows from the practical difficulty or impossibility of knowing the consequences of our thoughts and actions. It's as if this were just another case of not being able, in practice, to count the grains of sand on the beach.

Unfortunately, it's not like that. Parfit's paradoxes, which Sam cannot solve (any more than I can without cheating), are not about the practical difficulty of knowing the consequences of our actions or of counting well-being units. They go far deeper. To treat them as if they were about practical difficulties is, alas, to make an elementary error.

Once again, I feel I'm being hard on someone whom I admire and consider an ally. However, I do wonder why he was satisfied with this passage, or why the people whom he consulted let it through in this form. It's probably better that I don't speculate here - it's presumably just an honest error, after all. But this passage provides an opportunity for the book to go to the next level ... and it actually starts to get there before dealing with the problem as if it were merely about knowing the practical results of our actions (something that Parfit is not on about at all, as TML itself makes clear).

The following section is also interesting, but, whatever else it does, it doesn't address the questions raised by Parfit (among other things it argues that perhaps we should continue to accept that people will be biased towards loved ones, etc., as they may actually end up working to maximise global well-being; it's a kind of rule-utilitarian argument, but not one that addresses the deep problems about maximising).

3 comments:

"...relevant to the "counting grains of sand" issue...whether we are concerned to maximise total units of welfare (or whatever) or average units."

I think that this is very different from the grains of sand problem, which is illusory and simply the result of an incoherent question by the critic.

This problem is real, and arguably approachable as just a problem of valuation. Assume that positive things are fungible into an intrinsically good black-ink valuation and negative things into an intrinsically bad red-ink valuation (using businesses as a metaphor for conscious creatures). The question is then whether the universe balances the books of its subsidiaries with simple addition and subtraction, or with some more exotic function - say, multiplying profits by the ratio of profits to cash flow and combining them with a weighted average that gives exponentially increasing weight to lower-numbered institutions - chosen to give intuitively plausible results.

Rather, this is only problematic because Harris assumes that well-being has intrinsic value - i.e. that it is better for a being to exist in happiness/pleasure (or, for others, arete) without suffering than for it not to exist at all, regardless of that being's desires, regardless of how conditions are elsewhere in the universe, and even if those conditions are unaffected by the being's existence.

If instead morality is about reasons to act, and well-being corresponds to desire fulfillment, only extant desires and desires which really will arise matter. (Note: what desires will arise happens to depend on what happens now.)

Surely desires are normative reasons to act? I think others' internal reasons to act comprise our valid external reasons to do so. Can we not tell someone on pain of contradiction that he or she is arbitrarily valuing his or her own desires significantly and unjustifiably differently than others'? (Note: giving some priority to one's personal or kin's desires is probably wholly justifiable.)

This allows us to have objective morality without positing intrinsic value.

I apologize insofar as a comment positing a different (possibly viable) moral framework is off topic when discussing the particular problems of a moral system. Even more apologies are due, as these are the ramblings of an untrained person.

Nonetheless in our society people commonly conflate intrinsic value with objective morality, in much the same way religious people conflate evolution with abiogenesis (and even the big bang, hilariously) because they have always seen these concepts linked and produced by the same process. I think calling attention to this is important.

This is relevant to a discussion of Harris insofar as, when he approaches the difficulty from the "conscious creatures" side of the problem, in my view he generally gets things right. He touches upon why the consciousness of the creatures is important, namely because they have desires and beliefs. When he approaches it from the "well-being" side, he gets things wrong by treating the objective truth of morality as if it implied intrinsic value, independent of subjective opinion.

I don't understand why Parfit's problems should be called "paradoxes"... they are not self-contradictions, they are the failure of some objective metric to live up to subjective expectations. Parfit shows that by choosing an appropriate metric you can generate many bizarre results. Like Maxwell or Planck we might need to invent some new math, but I expect we can match whatever you have in mind, "ad hoc", if you will only please hum a few bars.

... not to be picky but the "grains of sand" comments suppose that "grains" are unambiguously well-defined discrete objects, which is never true in the real world due to the "hanging chad" problem. We could ask: Is there a lower size boundary? As grains split, when do they count as more than one? etc etc etc. Theoretical difficulty here too.