My bros and I believe that, regardless of an omnipresent god, each bro should live to benefit the greater good. But then, that got me thinking, “What does it mean to live for the greater good? Does it just mean to live with good intent?” Well, if Batman’s parents were never murdered, there would be no Batman. Thousands of bros would end up suffering at the hand of criminals who would normally have been stopped by the fearsome superhero. So, was the murderer of Batman’s parents living for the greater good? Or was he just a jackass?

Mascot, I’m a bro who knows his audience, and I have to tell you - coming out in rousing defense of the man who murdered Batman’s parents is conceptual suicide. I mean, yeah, in the end he gave us the Batman - but that’s like crediting Hitler for the development of synthetic rubber. Sure, WWII necessitated research into synthetic rubber by the Allied forces, and they successfully developed it. But surely the Axis doesn’t get the win there; you know who gets credit for all the good the Batman does? The goddamn Batman, that’s who. He’s not right behind me, is he?

Seriously though, this is an important question in utilitarian/consequentialist thought - what exactly does it mean to act for the greater good? Let’s assume we know what the ‘good’ actually is - maybe it’s pleasure, maybe it’s the pleasure/pain ratio, whatever. Should we act in whatever way we think will maximize good, or should we act in the way that will actually maximize good?

Yeah, that’s a confusing question. It’s okay, don’t panic. I know when I first heard the question, it didn’t make much sense to me - obviously I should act in whatever way I think will maximize good; that’s how I know what the right act is. How can I act to maximize actual good if I don’t know what actual good is? Expected good is all I’ve got. Besides, it would be crazy if murdering Bruce Wayne’s parents accidentally turned out to be the right thing to do, right? So I apply some decision criterion: I add up the relevant ‘good’ in each of my options to the best of my ability, and then I fucking go for it. Easy. I’ve done the right thing - I’ve acted, to the best of my knowledge, to maximize the good. That’s all anyone can ask of me… isn’t it?
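That decision procedure - estimate the good in each option, pick the biggest - is just an expected-value calculation. Here’s a minimal sketch; the options and all the probability/goodness numbers are invented for illustration, not anything from the philosophical literature:

```python
def expected_good(outcomes):
    """outcomes: list of (probability, goodness) pairs for one action."""
    return sum(p * g for p, g in outcomes)

# Hypothetical options with made-up numbers: helping has a big upside
# and a small chance of disaster; doing nothing changes nothing.
options = {
    "help lady cross street": [(0.95, 10), (0.05, -100)],
    "do nothing":             [(1.00, 0)],
}

# The subjective consequentialist's "right act": highest expected good.
best = max(options, key=lambda o: expected_good(options[o]))
# expected_good of helping = 0.95*10 + 0.05*(-100) = 4.5, so we help.
```

Note that the whole monster-truck problem lives inside those probability estimates: if the 5% disaster were actually 20%, the expected good of helping would flip negative, and “do nothing” would win.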

Not so fast, me-ten-seconds-ago. Are we seriously to be expected to calculate the expected outcomes of all of our actions? Because… that shit is ridiculous. Not to mention impossible. And how exactly should I be expected to identify the good in an outcome? Let’s say I randomly roll dice for what I think are my top five possible actions, and call that the “expected utility” of each action. I probably don’t get credit for doing the action with the highest roll, even if it ends up being the best option; that’s not a legitimate procedure to decide what the “greatest good” is. I got lucky. But what is a legitimate procedure? Today I helped a little old lady across the street. It’s just a thing I do sometimes, because bros can be good people too. I thought about the outcomes I could picture, and settled on one that I really liked: little old lady makes it safely across the street. Achievement Unlocked: “Greatest Good Calculated,” amirite? So I went for it. And since I’m such a charming conversationalist, we had a delightful talk about her swimsuit model granddaughter, who just loves intellectuals and apparently I’m exactly the sort of nice guy she HOLY SHIT A RUNAWAY MONSTER TRUCK.

If I had been paying attention at all instead of talking to this lady while I helped her, she wouldn’t have been run over, which sucked. Now, I did the thing that I expected to maximize utility, but I missed a giant, green and red factor sitting on 44 inch tires. Seriously, I really should have noticed. Totally a dick move, not noticing. This outcome was foreseeable, but not foreseen. Still, did I do the right thing? Can we just blindly stumble through life, doing things that seem like they will probably result in some good? If not, how much attention do I have to pay before my expectations become legitimate? Is it enough that I use some reasonable decision procedure, calculating the outcomes I can reasonably be expected to foresee and ignoring the other, crazier ones? How much can I leave to chance? These are tough epistemological questions. Any system which considers expected outcomes as morally relevant must have a reasonable way to decide exactly how those expectations should be arrived at.

On the other hand, philosophers who take seriously the idea that moral propositions, statements about the rightness and wrongness of acts, are objectively true, also take seriously the idea of an objectively greater good. This isn’t Kantianism up in here, bro; if you want your intentions to matter, you’re at the wrong kegger. The right acts are the acts which actually create the most good. Getting little old ladies across streets is a good act; getting them run over, not so much. Maybe I’m not blameworthy - I did my best and all - but for those of you keeping score at home, I fucked up. For these bros, the sort of decision procedures used to determine expected good are just helpful rubrics, guides to maximizing good that will more often than not work. So my decision procedure should help me make as many right calls as possible - sometimes I’ll have to round off and hope for the best, since if I take the time to calculate every possible outcome, I’ll never get anything done - and that’s the best I can hope for. By picking a rule that works more often than not, I can in the long run maximize actual good.
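You can see the “rules as rubrics” point in a toy comparison. Below, a blunt rule (“always help”) is scored over a run of hand-made scenarios against case-by-case decisions made from my noisy estimates; every number is invented for illustration, and the point is only that a decent rule can beat bad estimation in the long run, not that it always will:

```python
# Each scenario: (actual good of helping, actual good of not helping,
#                 my noisy estimate of helping, my noisy estimate of not helping)
scenarios = [
    (10, 0,  8, 1),
    ( 9, 0, -2, 0),   # I badly misjudge this one
    ( 8, 0,  7, 2),
    (-5, 0, -4, 0),   # the rare case where helping really is worse
    (10, 0,  3, 5),   # another misjudgment
]

# The rule consequentialist: always help, eat the occasional loss.
rule_total = sum(help_g for help_g, *_ in scenarios)

# Case-by-case: follow whichever of my (noisy) estimates looks bigger.
case_total = sum(help_g if est_h > est_n else none_g
                 for help_g, none_g, est_h, est_n in scenarios)

# Here the rule nets 32 units of actual good; my case-by-case
# guessing nets only 18, because two misjudgments cost me big wins.
```

Obviously the numbers were rigged to make the rule look good - if my estimates were better than the rule more often than not, the ranking would flip, which is exactly why the objective consequentialist says: pick whichever procedure actually maximizes good over the long run.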

But back to the Batman, if I may; does this mean that our murderous thief actually does get credit for the ass-kickings that Batman hands out on the reg? Some bros are willing to bite the bullet and say “Yeah, this time he accidentally did the right thing, but it was definitely the right thing.”

Think of all the other shit that goes into those beatings - Bruce’s dad’s decision to go to the opera that night, the training Batman received at the hands of Ra’s al Ghul, the bro who accidentally built Wayne Manor over a fucking cave. Without any of those things, Batman can’t beat criminals to a pulp night after night keeping Gotham safe. Do all these bros receive partial credit? Does only one of these factors really count? Any theory which depends on objective outcomes should provide at least some guidelines as to how far down the causal chain responsibility carries. That’s not a hard thing to do, just an important one.

I have good news, Mascot. It turns out whichever of these is true - subjective consequentialism, which focuses on expected outcomes, or objective consequentialism, which focuses on actual outcomes - you’re going to need some way to decide what to do. Don’t let all the possibilities paralyze you; if you want to be a consequentialist one way or another, you should use the decision criteria that you think will most consistently produce actual good. That way you’re covered in the long run either way. Probably.

–The SEP page on Consequentialism touches on these issues and other issues related to the Greater Good.

The Wikipedia page is also very thorough, though not as technically informed.

J.S. Mill’s Utilitarianism is easily the most famous consequentialist text, and he advocates a rule-based utilitarianism.