Bot 1: Not everything could also be something. For example, not everything could be half of something, which is still something, and therefore not nothing.

Bot 2: Very true.

Bot 2: I would like to imagine it is.

Coyne’s comment?

This comes perilously close to the ontological argument for God’s existence.

Um, no. No, it doesn’t. It’s not at all like it. There are no concepts or even forms in common between them. They would even be aiming at different ends, since the Ontological Argument is a purported proof of the existence of God and the Bot Argument works best as a defense against a claim that God does not exist.

Let me outline the OA again in simpler terms:

Premise: God is a perfect being.
Premise: A perfect being is perfect in all its qualities.
Premise: Existence is a quality.
Premise: To have perfect existence implies that the thing exists.
Conclusion: A perfect being must exist.
Conclusion: God must exist.

You can actually eliminate the first premise and the first conclusion to get the bare bones form, but this is what’s required to get to God. Now, looking at that, does it look anything at all like what the bots did?
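Put another way, the outline above is a formally valid inference from its premises, which is easy to make explicit. Here’s a minimal sketch in Lean 4 — the predicate names (`Perfect`, `PerfectIn`, `Exists'`) are my own placeholders, not any standard formalization:

```lean
-- A minimal propositional sketch of the Ontological Argument as outlined
-- above. Each hypothesis corresponds to one premise of the outline.
variable (Being Quality : Type)
variable (g : Being)                         -- God
variable (Perfect : Being → Prop)            -- "is a perfect being"
variable (PerfectIn : Being → Quality → Prop) -- "is perfect in quality q"
variable (existence : Quality)               -- existence, taken as a quality
variable (Exists' : Being → Prop)            -- "actually exists"

theorem ontological_argument
    (p1 : Perfect g)                              -- God is a perfect being
    (p2 : ∀ b, Perfect b → ∀ q, PerfectIn b q)    -- perfect in all qualities
    (p3 : ∀ b, PerfectIn b existence → Exists' b) -- perfect existence implies existing
    : Exists' g :=
  p3 g (p2 g p1 existence)
```

The validity is trivial; all the philosophical action is in whether the premises are true (especially “existence is a quality”). Which is exactly the point: the bots’ exchange has no structure like this at all.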

Coyne also seems to think that what the bots did counted as theology. Let me examine it as if it really was:

Theologian A – What is God to you?
Theologian B – Not everything.
A – But if God is not everything, that doesn’t mean that God doesn’t exist. After all, something that is half of something is clearly not everything, but still exists (i.e. is not nothing).
B – True. But the perfect God must be everything, as it must be infinite in its attributes. Thus, if you concede to me that God is not everything, you concede to me that God does not exist.
A – Then I do not concede that. Why, then, do you say that God is not everything, beyond assuming that God does not exist and so is nothing?

So, no, these bots can’t generate real theology, since real theology does a lot more reasoning and analysis than they do, and doesn’t stop at an “I hope this is true” except as the conclusion of an argument about why you might want to believe based on hope.

And, Dr. Coyne? I came up with this in five minutes while reading the post, waiting for my dishes to soak enough to clean. It isn’t hard to find better theology than this even using the premises given by the bots. And I don’t even do theology.

Maybe I’m reading the wrong books, and maybe the world needs me doing more theology and less philosophy and programming. Think I could make more money if I wrote a book about theology?

I’m not popular enough to draw the attention to get these sorts of questions answered, but I’d like people who think that “The Problem of Evil” is an issue to basically think about this:

If I deny that I care about having a “loving” God, but claim that omnibenevolent translates to “all-good” which, to me, translates to “perfectly moral”, how does that impact the Problem of Evil? Does it change the approach in any way? Are there any different considerations? Or does the question work out exactly the same way?

Because, to me, I want a moral God, not necessarily a loving one. And a moral God is required for either solution to Euthyphro’s paradox. But if I analyze the Problem of Evil in terms of morality, I find that by most of the major moralities the Problem of Evil is not, in fact, any kind of disproof, because all of them could allow suffering to exist on the basis of other criteria, criteria that we think might exist in this world. Love, however, might not, depending on how one defines it.

Currently in Shadow Hearts: Covenant, I’m lost in the Neamton Ruins while looking for the Shanghai Heaven magazine so I can get the white underpants, with enemies that I can easily beat but that have 800 HP and drain magic while I kill them, leaving my magic for healing and the like a bit low. Annoying, in other words.

The article starts by talking about Steve Jobs’ management style, which is classified as confrontational and angry:

In the summer of 2008, when Apple launched the first version of its iPhone that worked on third-generation mobile networks, it also debuted MobileMe, an e-mail system that was supposed to provide the seamless synchronization features that corporate users love about their BlackBerry smartphones. MobileMe was a dud. Users complained about lost e-mails, and syncing was spotty at best. Though reviewers gushed over the new iPhone, they panned the MobileMe service.

Steve Jobs doesn’t tolerate duds. Shortly after the launch event, he summoned the MobileMe team, gathering them in the Town Hall auditorium in Building 4 of Apple’s campus, the venue the company uses for intimate product unveilings for journalists. According to a participant in the meeting, Jobs walked in, clad in his trademark black mock turtleneck and blue jeans, clasped his hands together, and asked a simple question:

“Can anyone tell me what MobileMe is supposed to do?” Having received a satisfactory answer, he continued, “So why the fuck doesn’t it do that?”

For the next half-hour Jobs berated the group. “You’ve tarnished Apple’s reputation,” he told them. “You should hate each other for having let each other down.” The public humiliation particularly infuriated Jobs. Walt Mossberg, the influential Wall Street Journal gadget columnist, had panned MobileMe. “Mossberg, our friend, is no longer writing good things about us,” Jobs said. On the spot, Jobs named a new executive to run the group.

Well, there are good and bad things about this, and various ways that anger can be really, really bad if you don’t know what you’re doing. I’d argue that, anger or not, what Jobs did here was basically demonstrate accountability. They were working on a product, the product didn’t work, he berated them — holding them accountable for their failure — and then took direct action by sacking the executive who’d overseen this and putting someone else in that person’s place to, presumably, get the thing working right. So I’d agree that if the alternative is some kind of soppy “Well, we’ll get them next time” or a minor “We’ll have to work on this”, I’ll take the more bombastic approach. But do you imagine that it would be less motivating for the head of the company to come in and coldly appraise their situation, pointing out all the facts, declare that it was a failure and changes would be made, with a new executive in place and other consequences? Maybe it would be less motivating, just due to the influence of passion. But I think that the accountability — and making that clear from the start — would work mostly as well.

The problem with anger here — and everywhere, in my opinion — is that you had better be right. The cold guy looking at the numbers is simply heartless, but you know that he really doesn’t care about what the factors were. You didn’t succeed, and whether that was because you were incompetent or because of external factors is irrelevant. If the cold guy is good at the job, he’ll be able to answer what you ought to have done to overcome that — and if he’s analytic or knowledgeable enough, he already knows who’s to blame and has taken steps. Or, at least, you think that. But if Jobs is ticked off, did he really analyze all the factors and blame the right people? Maybe he isn’t aware that it was held up because the hardware guys on the iPhone decided that MobileMe wasn’t going to sell and didn’t support it. Maybe there was internal competition that messed things up. Maybe there were actual problems with the hardware that really caused the issue. Maybe that group did everything they possibly could to make it work, and simply couldn’t do it. Berating them for failing when they couldn’t succeed will generate anger itself, and if that was the case expect a ton of people to just quit. Why? Because he’s basically accusing them of being incompetent when they weren’t, and they have no reason to be confident that he did enough thinking to see that. And honestly, people don’t forget that. They’ll take it if they think it’s true, but if they don’t the negative feelings will linger.

Ultimately, if you’re going to be angry, you had better be right. If you are angry and aren’t right, you’re going to have to make very sheepish apologies to avoid the hard feelings. A more analytic approach that doesn’t actually cast blame but takes steps works better. So, in some sense, in contrast to what I said above, sometimes even drop notions of accountability, at least in the sense of blame. When you remove that executive, don’t make it sound like it was that executive’s fault or that they’re to blame. Make it so that it’s simply the case that they were not the person to make this work. If they are, in fact, incompetent, then do sack them for it. But if you aren’t sure, don’t cast blame. Just move on. Then there’s no real room for negative feelings.

Anyway, after that digression, moving on to anger being creative, and a possible counter to my suggestion that you don’t want people to get angry:

That, at least, is the takeaway of a new paper by Matthijs Baas, Carsten De Dreu, and Bernard Nijstad in The Journal of Experimental Social Psychology. Their first experiment was straightforward, demonstrating that anger was better at promoting “unstructured thinking” on a creativity task, at least when compared to sadness or a neutral mood. The second experiment elicited anger directly in the subjects, before asking them to brainstorm on ways to improve the condition of the natural environment. Once again, people who felt angry generated more ideas. These ideas were also deemed more original, as they were thought of by less than 1 percent of the subjects.

Now, this summary is dealing with brainstorming, which is what Lehrer claims was based on a non-confrontational approach:

In the late 1940s, Alex Osborn, a founding partner of the advertising firm BBDO, outlined the virtues of brainstorming in a series of best-selling books. (He insisted that brainstorming could double the creative output of a group.) The most important principle, he said, was the total absence of criticism. According to Osborn, if people were worried about negative feedback, if they were concerned that their new ideas might get ridiculed by the group or the boss, then the brainstorming process would fail. “Creativity is so delicate a flower that praise tends to make it bloom, while discouragement often nips it in the bud,” Osborn wrote in Your Creative Power.

I’d have to look at the study in detail, but I strongly suspect that the negative feedback issues aren’t the same. I doubt that the people who were angry spent their time criticizing other people’s suggestions and making everyone angry over that sort of criticism. That’s the sort of thing that will make brainstorming fail, as people won’t suggest ideas that are really out there because they’re afraid of being laughed at or ridiculed, and no one likes that. You get the most creativity from brainstorming when you can toss out any idea, no matter how dumb it might sound. You figure out later which are really dumb. So, what we likely had was people who were what I’ll call “ramped up” with anger who then did brainstorming mostly normally, and creativity increased. So what’s the explanation for that:

Why does anger have this effect on the imagination? I think the answer is still unclear – we’re only beginning to understand how moods influence cognition. But my own sense is that anger is deeply stimulating and energizing. It’s a burst of adrenaline that allows us to dig a little deeper, to get beyond the usual superficial free-associations.

I can buy this, to some extent. That was why I used the term “ramped up”. Adrenaline is running and people are getting a lot of energy generated by the anger. They need to burn it off, and so there’s a lot of energy available to be creative. But all you need to do, then, is generate that energy. Anger is one way to do it, but enthusiasm should work as well. If you make people be enthusiastic or excited about it, it should work the same way … right down to it running out quickly and being tiring.

The post moves on to talking about other negative moods and their impacts:

Consider a recent paper, “The Dark Side of Creativity,” led by Modupe Akinola. The setup was very clever: she asked subjects to give a short speech about their dream job. The students were randomly assigned to either a positive or negative feedback condition, in which their speech was greeted with smiles and vertical nods (positive) or frowns and horizontal shakes (negative). After the speech was over, the subjects were given glue, paper and colored felt and told to make a collage using the materials. Professional artists then evaluated each collage according to various metrics of creativity.

Not surprisingly, the feedback impacted the mood of the subjects: Those who received smiles during their speeches reported feeling better than before, while frowns had the opposite effect. What’s interesting is what happened next: Subjects in the negative feedback condition created much prettier collages. Their angst led to better art.

Well, I’d be hesitant to base anything on a judgement of “prettier” or “metrics of creativity”. What the heck are those anyway, and how do we know that they don’t themselves inherently build in things that would be produced in a worse mood? And how sure are we that it’s focus that makes the improvement, if there’s one there? So this is fairly sketchy, at least when it gets down to reasons for it.

Anyway, I think that people work best when they’re engaged in what they’re doing, which means that they’re enjoying it. Negative moods, in theory, affect enjoyment, but they can invoke similar bodily reactions and similar “passions” to genuine enthusiasm or enjoyment. But much more work needs to be done before we can conclude that anger is a good way to generate creativity. We’d need, at least, to see what downsides there are, and my guess is that those downsides will be quite substantial, as will the effects of other negative moods. But that’s something that can be tested.

There was a new Mr. Deity out about Euthyphro’s dilemma. You can find a link to it at Pharyngula. I don’t normally watch any of them — I haven’t found them to say anything new, and don’t find the scripts all that funny — but this one I watched a little bit of, and it’s no exception on that front. And I want to address some of the initial comments based on Vox Day’s “computer game” analogy, which might give people a new understanding of the various sides. And so I’ll start with the horn of “Things are right because God says they’re right”.

Before getting to the game analogy, let me address the first and most obvious “rebuttal” of at least that horn: “But, if you hold that, then that means that if God said to go out and kill all those cute kittens in that box, that would be morally right!”. Is that actually a rebuttal? What happens if the proponent of “Things are right because God says they are” simply says “Yes, it would”, unvarnished by “But God never would” or something like that? I suspect that most of those raising that objection would pull a Sam Harris and storm off in a huff, swearing that that person was, in fact, simply immoral. And, in fact, that’s usually what they do (I’ve seen it happen when someone merely suggested that genocide might be moral). But, at that point, you wouldn’t be making any kind of rational argument. They are, in fact, completely allowed to accept the consequences of their beliefs and, in fact, bite the bullet. And if they do that, then you can’t just declare victory unless you’ve gotten a real refutation somewhere, by forcing them to accept a proven contradiction. So that question, in and of itself, can’t refute that horn of the dilemma.

So a better way to go is to argue that if they accept that, then their morality is, in fact, arbitrary and, more importantly, can’t be objective. And if they don’t get an objective morality from God’s statements, then they don’t have the morality they want, and they can’t actually be moral in the right way. Ultimately, they’d be as relativist as anyone else; it would just be relative to God as opposed to society or to themselves. What they want, then, is to get their moral rules to be as objective as scientific ones, at the very least. And the argument would go that they don’t have it.

So, let’s introduce the computer game analogy, and our computer game designer. But let’s not start with morality. Let’s start with science. Imagine that we’re in the world of “Shadow Hearts: Covenant”. This is a world where, in fact, magic works. Where you can combine cloning with an ancient spell to speed time up for someone to revive his lost love. Where time travel is possible. Where naturalism is false because there are supernatural things, but where both magic and science work and follow set rules. All of this was set up by the game designer, who established the rules in advance and forces them to work that way.

It should be clear that if we actually lived in that world that the rules of that world from our perspective would be much different from the rules we have in this world. But there’d still be rules. And we could do science and other ways of getting knowledge to get those rules, and understand them. The rules we discover, then, would be just as objective as the rules we discover here. Science and magic would have, then, objective and objectively discoverable rules, just as at least science does here. Why? Because that’s how the game designer wrote the rules; to be, for the beings in the world, objective and discoverable. To deny that these are objective is to deny that our rules are objective as well. And surely we don’t want that.

So, now, note that there is a morality in that world, with good and evil and shades in between. And it could very much be said to be created by our game designer. The game designer has determined what counts as good, what counts as evil, and what’s dubious. What’s moral is then, in fact, simply the rules that have been defined by the game designer to be moral, in precisely the same way as the laws of nature and laws of supernature have been defined. And to the people in the world, the laws of nature and supernature are as objective as they are to us (or would be, if you want to nitpick about supernature). So, then, if those laws are not arbitrary, in what sense are the laws of morality arbitrary? They’re the exact same thing, created in the exact same way. And you can’t appeal to “whim” to make it arbitrary, since there is no reason to think that the game designer — or God — would change the rules once they created them.

Now, for the other horn, because the video also comments that if moral rules are moral independently of God saying it, then what do we need God for? And the answer is: Need? No. But we must presume in the analogy that if there are moral rules the game designer knows how they apply to the game. The game designer knows all the relevant considerations the characters must make, and so knows what every choice and every actual relevant moral rule to the game world is. And if that’s the case, and we thought we had a direct line to the game designer … why wouldn’t we take advantage of that to learn what the rules are, especially since we may not have it all figured out yet (as we’d be capable of learning them, but that doesn’t mean that we’d have them)?

Thus, Euthyphro’s dilemma seems fairly weak. Either horn leads to not unreasonable positions. Thus, we need to figure out which — if either — is correct. The dilemma doesn’t produce an inescapable paradox, but is merely an interesting question to help us identify the two positions.

Sam Harris decided to argue for a very radical idea on taxing the rich and ran into some comments from people who seemed to him, at least, to be Rand inspired egoists. He decided to reply to them and made a colossal error in doing so:

The result was Objectivism—a view that makes a religious fetish of selfishness and disposes of altruism and compassion as character flaws. If nothing else, this approach to ethics was a triumph of marketing, as Objectivism is basically autism rebranded.

Now, I’m not going to comment — at least not yet — on taxes and the “rich”. Also, others at Pharyngula have taken on how Harris is basically exploiting an actual mental disorder to make his point, and also how autistics don’t seem to, at least, act as if they are incapable of altruism and compassion. What I’m going to do is focus on the moral aspects here, and how even that’s inaccurate.

As seen in the essay here, people like Heidi Maibom argue that autistics tend to act more like Kantians — follow the rules without exceptions — than like Humeans — let emotions guide your morality — when it comes to morality. From this, you might get the sense that autistics lack empathy and even, in some cases, compassion. And they do, at least, lack empathy. But that isn’t what Objectivists hold. And you might also argue that autistics have a tendency to be self-absorbed, in the sense that they can be unaware of the world or those that exist outside of their direct experience. But that’s not even “self-interested” as per the Objectivist view, let alone “selfish”.

Objectivists are not Kantians and are not Humeans. They are, in fact, Hobbesian Egoists, arguing that in some sense humans have to — or ought to, at least — act only or primarily in their own self-interest. Thus, they deliberately choose their own interests over those of others, and calculate every interaction on the basis of how it benefits them. (Which, it strikes me, is how a lot of “well-being” advocates argue as well). That’s selfish.

Autistics, on the other hand, act on the basis of rules. They are incapable of acting on natural empathy, and so have to gain any empathy they have from following a set of rules. Even the exceptions, then, have to be rules. But as seen in the essay I cited above, autistics know very well that they have to act properly in a social world, which means having rules for acting on empathy and out of compassion and altruistically when appropriate. They are not, in fact, Egoists. They can’t be; they wouldn’t fit in with everyone else if they were, assuming that others are not Egoists as well.

The funny thing here is that this would turn on a risk of equivocating about what “selfish” means. Are autistics self-centered enough to be called “selfish”? Well, not in the sense Harris needs. But since he has an undergraduate degree in Philosophy and is talking about morality, I’m going to presume that he’s come across Egoism and Hobbes, and if he has then he’d have come across the basic bone of contention in Hobbes: that when Hobbes argues that everyone is inherently selfish, he defines “selfish” so broadly that it can’t carry the strong negative connotation the word usually does. If I am the sort of person who is made happy by helping others, Hobbes would consider that to be selfish, while most others would consider that admirable.

Harris, in one short sentence, commits that gaffe. Or, at least, he commits the gaffe of misunderstanding what the autistic and the Objectivist moral codes actually are. Objectivism cannot be autism rebranded because it is Hobbesian Egoism morally, while autism is Kantian. Kantians would not make the same arguments that the Objectivists would, and thus they are not the same thing. At all. And Harris should have known that.

A few years ago, I took a Linguistics course, and at one point we talked about the words “ought” and “should”, and about whether they had the same meaning. There was a comment that in some culture — I forget which — they couldn’t be used interchangeably, and I also commented that in philosophy they weren’t the same word either, which the professor agreed with … mostly because “ought” had a very specific and technical meaning in philosophy.

So, fast forward to moral debates. It’s become increasingly common to see arguments like:

1) People say that you cannot get an ought from an is.
2) But I can use an “is” proposition to get to “You should X”.
C) You can get an ought from an is.

And this would work if, in fact, “should” and “ought” were the same thing. But they aren’t. While in general you can use the two mostly interchangeably, you have to be very careful when you do so, because “should” is actually a lot weaker than “ought” is intended to be. When we make a normative claim — like one about morality — and say “You ought to do X”, we don’t mean that you really should do X but, hey, if you don’t want to, that’s okay. “You ought to do X” is not a suggestion about what you should do to achieve a goal. It’s an actual moral command, and so is far closer to a “must” than a “should”. Essentially, if a morality says “You ought to do X”, it means that if you don’t do X, you aren’t being moral. It’s not optional.

I think that Sam Harris makes this mistake, especially in his comparison to health. We don’t tend to use the strong “ought” when we talk about health. We say things like “To be healthy, you should drink two glasses of wine a day” or “You should avoid red meat” or “You should avoid fatty foods”, but we know that if someone doesn’t follow these suggestions they may still be healthy, and if they do they might end up unhealthy as well. These aren’t musts, but are just suggestions.

The other difference is what grounds the first one: health is mostly an instrumental value, and not an end in itself. We want to be healthy because it lets us do other things, like go out in the world or not have the unpleasantness of pain, but we don’t want to be healthy just to be healthy (generally). The same thing applies to wealth; in general, we want money to get things to increase our happiness, but we don’t — or at least shouldn’t — want money for the sake of having money. But morality isn’t that way. Being moral shouldn’t be something that you have to justify by appealing to another value, but instead should be something that’s an end in itself, and not something that’s merely instrumental. Or, at least, I argue that it shouldn’t be that sort of thing.

So that’s part of the difference. There’s always going to be a case where someone will sacrifice their health for their own pleasure, because health is only of interest if it provides pleasures or abilities. That means that recommendations for health always are suggestions, not commands; ultimately, the value of following a suggestion about your health is determined by whether the loss of instrumental value to get that healthy state is overcome by the increase in instrumental value from being healthy. Science is the same way. Science is not an end in itself at all, but only has instrumental value in that it provides either knowledge or useful tools to increase well-being or whatever. So the scientific method, then, is a suggestion. If you think that what science will give you is worth any loss of instrumental value from using it, you’ll use it. Otherwise you won’t. Science is different from health in that in general if you use the scientific method you don’t actually lose anything, and the benefits are almost always worth it. But it’s still instrumental and still, then, only shoulds, not oughts.

Morality and anything normative is not that. Being moral can and ought to be an end in itself, even though it can have instrumental value. Morality and knowledge have intrinsic value, while still at least potentially being instrumentally valuable. But you can’t appeal to the instrumental value — i.e. that it increases happiness — to justify them and why you should seek to be moral or to gain knowledge like you can with science and health. That’s completely misunderstanding what they are, and ignoring that they can be valuable even if they don’t have any instrumental value.

We need to treat the normative as ends, not as means. Once we do that, we can see why normative principles are closer to musts than shoulds, and can understand why, when it comes to the normative, “ought” is not “should.”

September is coming up, and that means the start of classes. I’m going to be taking classes again, and so that means a change in my posting schedule. For a while now, I was trying to make a post per day Mon – Thurs, with Fri – Sun being posts if I got around to it. That doesn’t work for me this term. So, what I’m going to do is make at least a post per day Sat – Wed, a post if I feel like it on Friday, and nothing on Thursday.

So, if this works, the days will change but the blog will update one more day per week. And with my taking classes there should be more content based on the class I’m taking.

While I don’t have to, I’ll try to start this new schedule … tomorrow. We’ll see how it works.