Category Archives: moral realism

Duties never truly conflict. Unless they are truly categorical. But if they are not categorical, are they truly duties?

You know what, I gotta take a walk. Forget all that stuff I said before.

– Immanuel Kant (astral form) as related to me, 0300 June 8, 2018

Every act is a political act.

– Cain, to whoever would listen.

A baker in Colorado claims to have managed the feat. He said that the totally gay-free contents of his cake fulfilled his obligation to show love for the Baby Jesus. Because, as everybody knows, the Baby Jesus don’t like the gays. Wait. Strike that. The Baby Jesus loves everybody, so he just don’t like the gayness.

Anyway, this baker loved the Baby Jesus. He refused to bake any cake with any gayness in it, and in doing so, baked into each cake his duty to abide by the wishes of the Baby Jesus.

Some might ask how the baker’s achievement was possible. Cakes are made of flour, sugar, mixing and heat. You will never find respect for the Baby Jesus between the crumbs or under the frosting. But that assessment is not fair.

The folks who ask to see the duty in the cake (God bless their simple hearts) are the same ones who, when told that green experiences reside in the brain, ask to open up a skull to see the green inside. They like to hold the notion of supervenience upside down, because it seems easier to grasp that way.

But it isn’t so much that neurons and photons and retinal pigments add up to green; the point is that green experiences break down in certain, common ways. Admittedly, the difference is a little tricky to apprehend. It has eluded smarter folks than the poor bastards delving for green things in a pile of brains. Mistakes about the difference have led some very smart people to propose that we can get rid of green, and everything else. Instead of saying “green”, we can just hold up a balance sheet with all the retinal pigments, neurons and photons on it. But then we’ll need a balance sheet for the neurons, photons and retinal pigments, and so on and so on. You can’t get away without primarily localizing things somehow, and you always end up reaching for the balance sheet labeled “green” when you want to indicate “green”, and then you might as well just say “green” in the first place.

The same mistake about supervenience gives rise to the notion of emergence. Emergence is the balance-sheet scheme for those who just can’t let go of Aristotle (and a very uncharitable reading of Aristotle at that). The only thing on the balance sheet, in the emergent case, is something like a metaphysical time-share: property theoretically without exclusive ownership, but available for occupancy by a variety of occupants in turn. For green, the pigments, neurons and photons tally up to a certain critical point and then begin acting with ‘greenity’, which subsequently begins to explain everything else directly related to green. In the case of the cake, flour, sugar, water, heat, and so on tally up to a certain point and suddenly – cakeity. Ask the obvious question – where does the cakeity or the greenity begin – and the whole thing unravels, just like the more detailed balance-sheet scheme. You circle back to simply saying ‘cake’ and ‘green’, and ‘cake’ and ‘green’ then break down in certain, common ways. Each cake and each green perception has its own, unique identity, without a homogenizing property reaching down to bring it into the categorical fold.

Now we can get around to duty in the cake. Not only will we fail to find specks of duty among the crumbs, but we can’t expect it to pop out of the baking process, or even to be the sum of baking, Bible verses, and love of the Baby Jesus. That’s OK, though. So far, duty fares no worse than green, or cake itself. But it is worse for duty, because duty does not break down in any reliable way. It doesn’t even break down in any definitive way.

The baker baked a cake without any gayness in it, because he loved the Baby Jesus. He told the world, but he would have felt that he was true to the Baby Jesus, even if the baker himself was the only one who knew that there was no gayness in the cake. So then, the duty can’t break down to any relationship between ideas or even attitudes. Maybe it breaks down to just the baker’s attitude toward the Baby Jesus. But then you don’t have an account of the compelling part of the perceived duty, especially regarding gay-free cake.

Loving the Baby Jesus is just loving the Baby Jesus. In itself, the attitude does not contain any obligation. You can’t break down moral obligations (or any other moral “properties”) to a supervenience base. Therefore, we also lack reliable generalizations regarding moral obligations and moral representations.

You can’t even make a cakeity (emergent) case for duties, because duties don’t even arguably emerge at some compositionally determined phase. Duties can pop up anywhere along the way, from turning on the lights in the bakery to accepting money for the cake.

The inevitable response to the above observation is an argument from incredulity which refers to the holocaust or infanticide. You can always say that it is morally wrong to throw a baby on the campfire, bake a gay cake, or exterminate a certain group of people, but such statements are always after the fact and are supported by historical fixation of the facts in the acrylic of moral terminology.

After all, moral arguments have been made in favor of all the above activities. And, the moral advocates have not differed with moral opponents of those actions on the factual contents of the actions; they have merely assigned different moral properties to the things and events which can, like a cake or a fire, be said to have a supervenience base, and about which effective theories are possible. In other words, moral ‘properties’ are merely attitudinal ephemera, pinned to the facts of the matter, whatever the matter may be.

Yes – let’s get that out of the way from the start. When presented with two piles of white granules a person can tell the salt pile from the sugar pile because the sugar pile is sweet. So much for the easy questions; on to the tougher ones.

What is sweet? Sweet is certainly not sugar, or stevia, or aspartame. It isn’t even a particular configuration of atoms and bonds in sweet molecules. Sweet is a personal experience upon which specific molecules, receptors, neurons, white granules, blueberries, and so on, can be mapped. Likewise, sweet is not sweet in and of itself, despite the fact that it is an entirely private matter. It maps onto other people’s experience, because those other people supervene upon certain, specific molecules, receptors and neurons in the vicinity of one’s own, and therefore in the vicinity of one’s own sweet experiences.

The great mass of interlocking phenomena realizes sweet, as much as anything gets realized.

Not everything in our linguistic pantheon is so lucky.

For instance, instances and their incidentals do not seem to realize moral properties.

We could sweep all the sweet experiences, with their related bits, into a neat pile and happily proclaim, “There is sweet.”

We could not do the same with moral good. There is stuff that won’t go into the dustpan, because moral terms are not simply rooted in our experience, like sweetness. Moral terms have a peculiar, sticky normativity to them which ‘sweetness’, and even terms quite similar to moral terms, such as ‘beautiful’, lack. Really, moralizing resembles sweeping together a pile of definitions for properties much less than it resembles curling.

Curling is a game played with a heavy stone equipped with a handle, a couple of brooms and a large sheet of ice. Teams of several players compete against one another. For each team, one player gives the stone a push across the ice sheet, while two other players frantically sweep the ice to speed or slow the stone’s progress. To win the game, a team’s stone must stop closest to a target painted on the ice.

The above is a description of curling, but it is not curling. Nor are the contents of the International Curling Hall of Fame curling, nor is the official curling rulebook. What the three intrepid curlers are doing out there on the ice – that is curling. When we say “curling” in reference to the structure of the rules, the stories of all the previous curling games, or a peculiar Canadian tradition, we speak in error.

Likewise with morality, which is not a set of stuff, a structure, or even a category of behavior. It is our most popular game, though according to Hemingway it might really be a sport, since we play it to the death with alarming frequency. The rules are simple: align intention (as in the ‘aboutness’ of your attention), truth (the bare contents of your intentional object) and motive (and of course there is but one motive).

When we think, “helping others is good”, the objects of our consideration are not specific actions, consequences, or even values. We can fool ourselves into thinking otherwise, but then we are browsing the Hall of Fame and telling ourselves that it contains the activity. In the Hall, we have the glass case of desired outcomes (good things). There is a spot on the shelf for reciprocal attitudes (the basis of helping). Yet the cases of items are merely tokens of success and failure.

When we set out to help someone, we have a perception of that person in a context with a certain shape and extent. A motive fixes our attention to the perception. Then, we act to reconcile the bare contents of our motive with the bare contents of the related perception. The activity is what we mean by ‘morality’.

For example, I am at the coliseum for some good, clean fun. The lions are just about to do their thing, and I spy little Claudius down front, crying. He is too short to see over the wall. If I am a simple man, disturbed by the child’s distress, I will boost him up to make him happy. If I am a more subtle sort, I will give him some instruction on how to find a better vantage point, so that he never needs another boost. If I am truly enlightened, I will take him out of the coliseum for a snack, because encouraging him to watch lions tearing prisoners apart as entertainment would contradict my impulse to help Claudius in the first place, since such impulses spring from an empathetic instinct.

Each helper can see the efforts of the other helpers as helping. Each sort of help is morally good. But the deeds, outcomes, and judgements are all secondary. The primary thing is an underlying psychological activity. And that is not a thing at all, just like curling.

Purpose gives life value. Most people would agree with that (false) statement without being able to properly explain what it means. To be fair, when its morally authoritative proponents speak of purpose in an existential context, they may mean one of two things. The intertwining intents make for a confusing narrative, so some untangling is in order.

The first, predominant meaning is the common usage of ‘purpose’: instrumental to an extrinsic end. A good example of this sort of purpose is the purpose of a humble noun.

“Cat” has an instrumental purpose. All it does is represent a certain class of lazy, mammalian parasites (who we love anyway). We could name the same category with a different phoneme and nothing would change. The sound and spelling derive their purpose from their use toward an end outside themselves.

The second, less commonly expounded thing to which moral leaders refer when they speak of existential purpose, is something more like ‘content’. The word then gestures at the richness of a personal story. On this account, Immanuel Kant and Idi Amin led purposeful lives.

Of course, lay-speakers often intend both meanings at once and also equivocate freely between the notions ‘instrumental’ and ‘full-of-content’. And lay-speakers cannot be blamed for the muddle. It is intentional.

We are all told, explicitly and implicitly, morning to night, from birth to death, that content comes of instrumental purpose, and one justifies the other. Our religions tell us this. Our politicians tell us this. Our employers and professions tell us this. And they all tell us that this mechanism gives value to existence.

The pervasive message of human civilization is: instrumental purpose makes purposeful content, makes value. But that is not how we work. Acting as an instrument may serve as a means of expression, but expression of motive (will to power) actually produces the value of our personal stories.

The endpoint itself makes no difference.

Nor does the report of our lives’ content capture their value. A slave may live a wild adventure from crib to deathbed and still rightfully feel cheated. To think that the endpoint, and the content generated in pursuit of that endpoint, themselves yield value is itself a moral failure.

On a certain level, my life’s career has been an in-depth study of fear. It has had a hold on my imagination since my first nightmare.

That dream was a standard horror. I was running from something invisible behind me through a dark, tangled wood. I tripped, and recovered, but I could tell that the missed step had cost me my chance to escape. Just before my pursuer caught me, I woke up.

Instead of going back to sleep or crawling into bed with my parents, I lay awake wondering exactly what was chasing me and why I feared it.

In retrospect, those first questions about my nightmare led to a decades-long exploration of emotional aversion. From fights, to speed, to height, I fixated on the subject. It was an unconscious enterprise at first, but eventually I began to reflect on what I was doing. Through reflection and reading, I learned something about fear beyond instinctive familiarity and mere control.

In summary, fear is nothing more than emotional aversion. It is the feeling of motive turning aside, and as riders on motive, fears present themselves to us as motivations present themselves to us. To borrow Nietzsche’s formulation, they come to us unbidden. I no more choose to be afraid of being hit than I choose to start paying attention to time’s passage when I wake up in the morning.

In a certain sense therefore, we may be exonerated for our fears. They happen to us. However, what happens to us makes us, and we accrue responsibility by and for our constitution. That is the very nature of moral responsibility, as opposed to the sort of responsibility we take on when we park our car in a handicap spot, for example.

To follow this example down the line, parking in the reserved spot may carry a whole, separate load of moral implications, of course. Fellow citizens may hold us in moral contempt for the act of parking selfishly. Some of our neighbors will even find a statement of intent to park in the reserved space as morally offensive as the act itself. The city cop doesn’t care about motives. His concern – the law’s concern – is functional.

The act of steering your car into the slot suffices for the law, no matter how the officer may feel about it, or your intent. You get the ticket, even if you have suffered a stroke the day before and are handicapped, but lack the proper permit – a situation which absolves you of moral responsibility in the eyes of most people.

The point is: morality is not a set of laws like the municipal codes. If I don’t have a placard and do not want a ticket, I ought to avoid parking in a handicap space. Not parking in the handicap spot definitively makes me a non-violator. However, no such action will make me good.

Morality is not a set of facts in the world. I can’t look at the handicap spot and say that it is 25 square meters, blue, bright and benevolent.

Morality is not a set of sentiments. I can feel sad about having to bypass the handicap spot and park in the boondocks. But, I will also feel sad about actually parking in the spot, if I am good.

Moral responsibility resides in global action, not circumstance.

In the latter sense, we may not be exonerated for our fears. Our emotions are inseparable from the motives which birth them. So all of our emotions have a latent moral dimension, because the moral nature of our actions depends essentially upon our motives. Morality appears to be the process of reconciling motives, the psychological conditions which evoke those motives, and the truth.

And if morality is a class of activity, rather than a formula or a set of real properties in the world, then fear carries the greatest moral weight of any of our emotions.

All other emotions follow from their associated motives, but fear has an echo. Within the individual, anger does not evoke anger and admiration does not evoke admiration (except perhaps in a really committed narcissist). However, fear evokes fear.

Our impulse is to turn away from our aversion, resulting in a spiral which orbits farther and farther from the truth.

Today was a climbing day, and I woke up tired. This happens with some regularity, and I have learned not to put too much stock in feelings of early morning fatigue. Like delayed-onset muscle soreness, tiredness is part of life’s Muzak.

I have learned to just get up, move around a bit, and turn off the thought process until the first 8 oz. of coffee get into the moving parts. Then, I can take a breath and figure out what I ought to do. Sometimes I figure I ought to go climbing; less frequently, I figure I ought not.

I did not go today. There were traffic issues, household chores, homework for the kids, and an empty fridge, all weighing on me. But I could ignore those trivialities if the day looked promising from a climbing standpoint. If I had a good day out, I would return with motivation to spare for shopping, vacuuming, and glaring at a teenager while he did everything in his power to avoid completing an English research project on time.

However, today did not look promising. When I thought about the plan, I could not get my motivation to gel around the climbing which lay in store. Of course, a sort of meta-motivation was there, driving the self-assessment process.

Meta-motivation is part of the Muzak too, and is the explanation for why I actually get up when the alarm goes off, instead of following my tiredness back to sleep.

I can climb on the meta-motivation. I have climbed on the meta-motivation. It depletes itself, though. It relies on ambitions and creates them – getting to the next level of difficulty, getting payback on the route that thwarted me, keeping up or catching up with partners. Leaning on the meta-motives fails to reconcile the day’s motives with their sources in one’s emotional state, severity of muscle fatigue, metabolic state, etc. It works for a while, but the sources will not be ignored forever, and come back around to bite in the form of injuries and burn-out, neither of which can be overcome by ambition.

The day’s motive is the real thing, not the desire to realize plans and ambitions. Too bad it is so slippery. It can be reconciled with its sources in principle, but understanding the depth and relevance of the various sources is tricky.

The climbing-day ritual, in which motives get explored and reconciled with current affairs, is a moral endeavor, of sorts. Through it, I learn what I ought to do, and in a way which cannot be attributed to a calculation of debits and credits, or simple puzzle-solving, in which I just match up pieces of motive and facts at hand.

I think maybe that’s the way it is with all moral endeavors. They aren’t problem-solving with moral facts. All moral evaluations seem to suffer from the troubles of theodicy, if they are factual. The explanation for the existence of evil in a world ruled absolutely by a good God eventually defaults to the relevance of evil in light of God’s (infinite) magnitude. But all things go to zero along that asymptote. So it is with the determination of moral facts. One moral fact may always supersede the next, looking forward, and the qualifications proliferate endlessly in retrospect.
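The asymptote can be made explicit with a loose formalization (my own sketch, not any theodicist’s: take E as the weight of some finite evil and M as the magnitude against which it is judged):

```latex
% Any finite evil E, weighed against an unbounded magnitude M,
% has its relative weight driven to zero:
\lim_{M \to \infty} \frac{E}{M} = 0
% But the same holds for any finite good G -- which is the trouble,
% since "all things go to zero along that asymptote":
\lim_{M \to \infty} \frac{G}{M} = 0
```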

If that’s the case with the pursuit of moral fact, then pursuing moral fact is much like climbing on meta-motivation. The chase will lead to diminishing returns and, finally, to contradictions.

The Hateful Eight is a Western movie by Quentin Tarantino. The title is a reference, if not an homage, to the famous Western, The Magnificent Seven, which is an American take on Kurosawa’s film, Seven Samurai.

Though this post will examine the plot and characters in The Hateful Eight, there is no need for a spoiler alert. Representations of art cannot spoil the experience of art. That is because true art is not didactic. It is about what it depicts rather than being a diagram, so experiencing the art is everything, and knowing things about how a work of art is put together can never substitute for the experience.

Whether you think The Hateful Eight is good art or bad, it meets the criterion above. The film does not document the Western landscape, rugged individualism, or violence; it is about the Western landscape, rugged individualism and violence. I happen to think it is pretty good art, but I hate it anyway. Let me explain.

In the first part of the movie, we learn about the characters, who are all forthright, tough individualists. They have come West after the Civil War. They have come to be free to be themselves. They have come to be free of their pasts. They have come to get away from the hell of other people.

On a long stagecoach ride, the rugged individualists recount all the ways in which they have stuck to their principles, no matter the cost. They have been heroes in war and agents of justice afterwards, no matter which side they championed. What matters is that they have championed something, and have served blind justice.

But then, the stagecoach stops at a lonely outpost. The conversation moves indoors. Other people become involved. And in an ugly crescendo, we are shown the consequences of unyielding principle, and an ethic which extols championing one’s principles as a virtue in itself. The result is scorched earth, and an endless cycle of vengeance chasing death, all sustained by the moral satisfaction which comes of living a principled life.

As the cycle plays out, the Hateful Eight sacrifice others and finally even themselves, a piece at a time, in the name of family bonds, racial justice, legal justice, and cultural allegiance. If the first part of the film invites the audience to share a draught of moral satisfaction with the characters, the second part challenges us to keep on drinking as it all turns to blood.

Because, the narrative doesn’t change as events on screen descend into an orgy of violence. The action is cartoonish, but the actors do not play it tongue in cheek. They do their best to keep it real. Their efforts seem pathetic at first, then sickening, as each side in turn slakes its thirst for justice on the suffering of the other.

At some point, the film invites the viewer to turn away from the escalating grotesquerie, and when the viewer does turn away, that’s when the film really becomes art. Because, veering off in disgust is a hypocritical act. The audience hasn’t earned the right to look away. We were just admiring the characters for the very traits which generate the revolting atrocities in the second part of the film.

And haven’t we engaged in the same hypocrisy in real life, whenever we’ve bought into the Western-spirit myth of self-reliance, toughness and self-righteousness without acknowledging that that same spirit has just as often manifested as selfishness, callousness and zealotry? We love Lewis and Clark; we choose to forget Wounded Knee. We admire Custer’s bravery at the Little Bighorn while we stubbornly ignore the intentions which led him to that spot. We buy into the nasty Western contradiction every time we choose to watch a Western movie.

Yet the film’s indictment is flawed. We do turn away, so we can make the distinction between, for instance, Bill Hickok and Emil Reuter. In illustrative contrast, the original film in The Hateful Eight’s family tree recognizes the schism between our moral ideals and our emotional reflexes.

The young samurai who idolizes the leader of the Seven Samurai expects glory and honor from defending innocent villagers from a gang of bandits. What he gets, in the course of achieving his victory, is one bitter loss after the next. He finally turns away too, and although he achieves some peace in understanding that the choice to fight is merely one grim option among many, he must also accept that there can be no moral equation which resolves those choices. The last scene questions whether his own choice really is worth it – and if he could even know anymore, having made the choice.

Tarantino’s film points an accusing finger to the same end, but aiming the finger sustains the cycle of judgement and reduction. Sure, it brings us in and makes us feel what it’s really like inside, but it is an oversimplification. It dodges the hard questions which arise in arguments about just wars or the enforcement of human rights. It leaves open the possibility of moral equations.

So, though I hate to say it, I do hate The Hateful Eight. And I hate that that is my inevitable conclusion.

My twin and I share two pairs of identical running shoes. One pair is green, the other, gray. The shoes are otherwise indistinguishable. I wear the green shoes exclusively. I have won many races with them, and I consider them lucky.

My twin wears either pair. He cannot tell the difference between the two sets because he is color blind. He runs just as well while wearing the green shoes as he does while wearing the gray shoes.

I flounder in the gray shoes. He can beat me every time if we trade colors, because the duller pair does not recall soaring victories. The gray shoes mean nothing to me.

Though the difference between the shoes is entirely subjective, it is nonetheless real and it is true that the greenness of the shoes means something, even if no one knows it but me.

Now, you can say that I am silly for evaluating the shoes by color. You can say that I’m doing it wrong (if you have a solid alternative to present). But you can’t say that a subjective evaluation, with attendant meaning and minimal truth (and really, what else is there?), inherently fails or is necessarily unreal.

Well, I guess you can persist in insisting that subjective evaluations are not real, if you want to branch off into a dispute about what makes something real…

There is an interesting post here about jargon. It explores one of the useful aspects of jargon, and as a consumer – indeed a purveyor – of jargon in the medical field, I completely agree. Technical terms give us simple clarity, and simple clarity is one of the most useful things around.

The post focuses on the utility of jargon within its natural environs – dialog between professionals, where it is quite useful as shorthand. As an example from my world, when I say ‘appendicitis’ to someone in the medical field, a fairly specific array of physiologic and anatomic processes comes to mind, along with their likely manifestations, consequences, implications for diagnostic testing and treatment, associated research studies, etc.

The conversation can move right along. Plus, by way of its scope, the use of technical terms can serve as a checkpoint in the dialog. If there is a malapropism, it is apparent.

When a colleague says, “The negative ultrasound ruled out appendicitis,” the conversation must stop. We must clarify why he thinks that the ultrasound ruled out appendicitis, because it is commonly accepted that ultrasound does not, in and of itself, rule out appendicitis. The term ‘appendicitis’, as jargon, contains the understanding of its diagnostic criteria for those in the know.

The situation is different when a patient says, “I think I have appendicitis.”

Typically, the lay person who makes that statement knows little to nothing about appendicitis. The word refers to little if any of the content it carries when I mention it to a surgeon. However, the same process flows from its use, or rather misuse.

The lay person’s usage brings up the question, “Why do you think that you have appendicitis?”

In other words, technical terms provide some solid surfaces in an otherwise squishy conversational world. If we can’t alight upon them, then at least we may bounce off of them in some direction, rather than landing splat in misunderstanding or mere conflict.

The common complaint that jargon is obfuscation doesn’t hold up when we consider the honest usage of technical terms, even outside of their professional environment. There is, however, a dishonest way of deploying jargon.

The current poster-child for such corrupted terminology is ‘mindfulness’. In its original sense, the word referred to a non-reflective state. The idea was: your mind stays fully engaged with what is happening in its scope of awareness, without reaction or abstraction. It was the kind of thing which dart players, test-takers and athletes sought.

Now, though it still gets used to mean engagement with the present, it may also stand for a state of detached self-awareness, in which one is monitoring and regulating one’s responses to one’s present situation. Clearly, the latter meaning is at odds with the former, if only because the latter refers to an essentially reflective activity. Dishonest users of the term shift back and forth between the meanings depending on the goals of the user’s discourse. If the occasion is a corporate retreat aimed at promoting harmony in the workplace, the second meaning is used. If the speaker wishes to convince the listener that chronic back pain does not require morphine if one simply ceases to reflect upon said pain, then the first meaning of mindfulness is implied.

Clearly, the sort of shenanigans at work when people bat around ‘mindfulness’ are what give jargon a bad name. Mindfulness started out its career innocently enough, as something which Zen practitioners and coaches discussed. But along the way, it picked something up. As something useful, it came to possess an air of desirability. As something desirable, it acquired the reputation of being something good, and then, of being good in itself.

Once imbued with moral character, the technical meaning of mindfulness, along with all associated contents relating to its use, became subsidiary. Being mindful became less important than being a mindful person, and when a moral role presents itself, it is open for definition. The corporate lecturer can tell us what a mindful person does at work. The pain specialist can tell us how a mindful patient takes medicine. The roles make the meaning henceforth.

The situation seems at least a minor victory for the moral expressivists – those who claim that our moral claims are not claims at all but expressions of sentiments like approval and disapproval. It would be a victory too, if the abusers of technical terms were actually making moral statements. But they are not.

When people utilize a bit of jargon with moral character, they are using it as a means to an end. They are weaponizing it. The listener doesn’t receive a sentimental expression from the speaker; the listener is invited to fill in the sentiment. The audience at the corporate retreat must make the connection: a weekly post on the suggestion board means I am mindful, which means I am good. That line of thinking isn’t really moral reasoning; it is a facilitated rationalization.

Jargon as a technical tool is not the problem. Yet, we are right to be wary of jargon. Its use should put us on the lookout for manipulation. But we should not be afraid to use it either. We must just take care to use it mindfully, by which I mean being critically aware of one’s attitude toward the current subject, which was once known as being an adult. Oops…

On a cold morning, a little girl named Suzy is waiting for the school bus at the bottom of a steep hill. It was raining the night before, and water has been flowing next to the curb. The water froze in the early hours of the morning, forming a sheet of black ice. The ice sheet extends all the way down to Suzy, and unfortunately for her, passes under the tires of a Cadillac Coupe DeVille parked in the middle of the hill. As the sun hits the hill, the ice loses its grip on the tires and the car slides silently and rapidly down the hill, striking Suzy and killing her instantly.

Now suppose the same chain of events ensues, except this time, the car breaks loose just as the car’s owner, Andy, sits down in the driver’s seat and closes the door. The inside door handle is broken, so he can’t just jump back out again. The power windows are up and the horn doesn’t work, so he has no way to warn Suzy of her impending doom. He desperately turns the wheel, but the ice is too slick for the tires to grab. Suzy dies just as in the first scenario.

Again, suppose the circumstances are the same, but this time, the owner of the car is different. Let’s call him Brian. When Brian realizes that he is sliding out of control, he thinks, “You know, I’ve always hated that little bitch anyway,” and he turns the wheel to direct the car toward little Suzy. Again, the tires have no purchase on the ice and the chain of events is unaltered.

Is there a moral distinction in the incident between the unoccupied car and the occupied car?

Between the incident with Andy and the incident with Brian?

If so, where is the independent and objective moral fact in each case?

Imagine that none of this actually happened, but that Andy and Brian each dreamed the same dream, in which they behaved as they behaved. Each wakes with a sense of satisfaction about his own behavior in the dream, and goes on to live an impeccable life thereafter, never harming a fly. Is there still a moral distinction to be drawn between the two men?

When we speak of morality, are we describing a fact with inherent causal efficacy – like a runaway Coupe DeVille – or are we describing an attitude (or the formation of an attitude)?

I have a purple shirt, or maybe it is royal blue. I was never in doubt about the color until my wife called it blue one day. Up until that point, I never even contemplated calling the shirt blue, or that there might be a difference between my perception of the shirt’s color and hers.

Maybe there still is not a difference. Maybe our perceptions are the same and the words we use differ unnecessarily. If I look hard, though, I can see how she would call the shirt blue.

Her perceptions and mine are almost certainly not the same, nor are any two people’s. The alternative – that people disagree about colors, and so much more, because our language is massively mistaken – seems too incredible. Shouldn’t we have ferreted out even the most minor issues by now? After all, we do so well at finding agreeable words for so many things, even in the realm of aesthetics.

Plus, there is a good explanation for the source of disagreement between me and my wife on my shirt’s color. If one tracks back how each of us learned to classify blue and purple experiences, there are substantial differences. And those differences do not only affect our use of words; they also condition our purple and blue perceptions.

Yet there is another problem lurking. Even if I could magically take a snapshot of my brain at the moment in which I saw the shirt as purple, and show it to my wife, not as a map or photo, but as exactly the same state of affairs imposed upon her neurons, she could still differentiate it upon reflection. The brain state in question would always be her experience of my experience, rather than simply her experience. My experience of the shirt’s color cannot be captured, as mine, by means of physical reproduction.

One might ask, who cares? The upshot of our limitations is tolerable. Big truths may be a little counterfeit by implication, but we are accustomed to working with flawed notions already, and do fine by it. For example, Newtonian mechanics serves us beautifully, even if it is not ‘really true’.

Yet, we do not tolerate our flawed notions. An optimist would say that we are not satisfied with lesser things, and are constantly trying to improve our understanding. Our behavior suggests otherwise, however. We want big truths in principle, and the certainty, the reality, that comes along with them. In physics, we don’t just want quantum mechanics and relativity, we want a theory of everything. In ethics, we want good and evil, and duties to serve.

So, the hard problem does matter, because it is motivating. And, it moves us to a harder problem. We want things to be true which are not merely false, but which are incapable of being true or false. The idea of a concept not being truth-apt is slippery, so an illustration is in order.

Consider the case of Baby K. Baby K was born over two decades ago without a brain. Not only was she(?) born, she pulled off a feat which few anencephalics manage; she lived more than briefly. Or rather, she maintained a metabolism more than briefly, because her status as a living thing, much less a living human infant, was in question. She would never see a purple shirt, or a blue shirt, or have any experience at all. And since our personal experience is what we value above anything (what choice do we have, after all?), some people felt that a creature without experience and incapable of it was not truly alive, much less human.

Baby K’s mother disagreed. She felt that K was born of a human, exhibited some behaviors, had a heartbeat, and therefore fit into the human peg-hole, albeit imperfectly. K’s remarkable persistence owes to her mother’s insistence on aggressive medical interventions for K, based on K’s status as a human baby. For K’s mother, the rules of classification were categorical. There are Forms in the world, according to this school of thought, and the Forms suck their creatures in, even the most flawed copies.

When Baby K had trouble breathing, her mother took her to the ER and demanded that Baby K be saved, put on a ventilator, and nursed back to health in the ICU. But was health one of K’s capabilities? She needed saving, but for what, and from what? We could not ask K about any of this, ever, even in principle. As her physiology counted down to its end, what was there to distinguish this tick from the following tock, and so provide a basis for valuing more of the physiological process?

When K came into the ER, the professionals on duty did not want to treat her. Since she was incapable of experience, she had nothing to value (there wasn’t even anyone there to value anything). Efforts to ‘help’ K were therefore empty. There was nothing to help with and no one to accept the helpful gesture.

Remarkably, some argued that further medical interventions merely prolonged K’s suffering. Perhaps they meant to say that further interventions caused the staff to suffer. More properly, futile actions degraded the integrity of the medical professions. We become what we practice, and if the medical professionals practiced service to the beating heart, then they rightfully feared that they would become servants to the beating heart.

The hospital also expressed concerns about the resources that K consumed. This argument was a utilitarian argument and failed in the usual fashion. If K did not occupy the ICU bed, the bed would not move to an under-served area, nor would the unexpended cost of K’s breathing tubes and procedures be converted into mosquito nets for children in malaria-afflicted territories. Values are not generally translatable, any more than their costs are portable.

But the missing cipher in the professionals’ calculation was K’s value to her mother. Someone did experience K’s physiology after all. To waive K’s value on that account was just as degrading as crass service to the beating heart. If the medical professions seek to serve health, and health is function, then the milieu is everything. It was a mistake to consider K’s value on the basis of K’s intrinsic capacity for experience, just as much as it was a mistake to think that the ventilator was saving K herself from or for anything. However mistaken she was about Forms and their efficacy, K’s mother valued K’s beating heart in a consistent way. Harm would come to the mother from K’s heart stopping. It would be the same sort of harm – loss of experience and the possibility of experience – to which the professionals referred in their assessment of K’s lack of value.

All along, the players in the Baby K saga evaluated her with standards that did not apply – that were not truth-apt. It was never the case that Baby K was human or not, alive or not. Her case nicely demonstrates the nature of the harder problem. Our standards – good, evil, human, matter, energy, mine, yours, blue, purple – are not stand-alone things. They are made of their circumstances (our circumstances). Without a doubt, the standards serve us well, since our circumstances are necessarily shared. If the standards refer to the specifics, and the specifics are near enough alike, it’s just good fudging to defer to the standards. It is easy to forget that the standards defer to their instances. And we are motivated to forget, because we value our experience and we value our standards, and we are prone to equate the two.