Critique of “The God Delusion”, part 1.

Introduction:

In one of the quotes used to promote the paperback edition of the book, Penn and Teller comment: “If this book doesn’t change the world, we’re all screwed.” I’d put it quite another way: if this book DOES change the world, we’re all screwed. This is because the book, by Dawkins’ own words, is aimed at converting theists into atheists. Dawkins says: “If this book works as I intend, religious readers who open it will be atheists when they put it down” [Dawkins, The God Delusion, pg 28]. I submit – and this is the main thrust of my reason for writing this critique – that Dawkins’ book does not meet the high standards required to do that; while some might indeed read his book and convert, that says more about their own attitudes than it does about the quality of the arguments he’s presented. Basically, my view is that the arguments he presents are in no way strong enough to support the claim that everyone should be an atheist, since they aren’t strong enough to make atheism the only rational position to take. And so my critique will be aimed directly at demonstrating that. I am not going to claim that Dawkins is WRONG or that a God does exist, just that anyone who still believes in God even after reading his book is perfectly reasonable in doing so, since his arguments are not strong enough to demonstrate that no one should believe in God.

A brief – but critical – digression here, into a discussion of terms. In the current debate over the existence of God, certain terms have come to have technical meanings that don’t necessarily align with what most people think of when they hear them. So “atheist” has come to refer to ANYONE who lacks a belief in God, even if they don’t positively believe that God does not exist, whereas to the general public “atheist” generally means someone who believes that God does not exist and “agnostic” refers to someone who has not yet taken a side. “Agnostic”, in the technical sense, simply means someone who does not know whether or not God exists, regardless of whether or not they in fact believe that He exists. The distinction between those who have not taken a position and those who have taken the position that God does not exist still exists, but is captured by the terms “weak atheism” and “strong atheism” respectively. Dawkins, for the most part, does indeed seem to use and stick to these definitions, so it’s important to outline them before proceeding with the discussion.

So this raises an interesting question for anyone who is reading “The God Delusion” to consider: is Dawkins a strong atheist or a weak atheist? Note that the positions do have differing consequences: strong atheism is clearly a claim about the world and clearly is itself a belief, while weak atheism probably isn’t. Because of that, strong atheism – as a belief – can directly influence and justify action, while weak atheism probably can’t. Strong atheism carries a burden of proof – as a claim about the world – while weak atheism probably doesn’t. Determining which sort of atheist Dawkins is matters for determining just what claims he can make about what his position holds and does not hold him to, and is interesting besides.

(Note that later Dawkins outlines something like seven positions on a scale where someone can fall on the issue, which I’ll address at that point. For now, let’s just keep this simple, with strong versus weak atheism and agnosticism.)

Now, to be fair, I should outline where _I_ fall on this scale. I’m an agnostic theist. I believe – I’d even say that that belief approaches knowledge, if it isn’t knowledge – that the God of the Abrahamic religions (the focus of even Dawkins’ book, so it’s fair to simply refer to Him without worrying directly about all possible gods, for the most part) has qualities such that we can never know whether such an entity exists or does not exist. How do you prove that an entity knows everything that can be known? Not knowing even one knowable thing – no matter how minor – would mean that the entity was not omniscient. So if an entity had proven that it knew the cure for cancer but didn’t know that VI is 6 in Roman numerals, it isn’t omniscient; how do you empirically show that an entity knows all facts, great and small? The same thing applies to omnipotence. So it doesn’t appear that even if an entity showed up and claimed “Yes, I’m God” we could ever prove that it had those qualities.

In short, for me even IF God came down and performed miracles, I don’t think I’d be able to say that I know that God exists.

Now, there have been a number of arguments raised both for and against the existence of God, which Dawkins discusses in his book and I’ll address in more detail later. I’ll note, for now, that they fall into two main categories: purely logical and rationalistic proofs (that do not rely on any empirical experience at all) and proofs that purport to show that the way the world itself is proves or disproves the existence of God. In the first category, we have proofs such as the “greatest possible being” argument, and all of these have been shown to fail because even if the arguments are valid they haven’t been shown to relate to the real world at all; it is not clear that accepting the arguments forces us to accept the real existence or non-existence of God (to be fair, there are more proofs for God through this method than disproofs). So the first category seems to be the wrong sort of proof for God. The second category starts out far more promising, and includes probably the two best arguments for or against God: the argument from design and the Problem of Evil. The issue with these is that while they, if sound, would indeed reflect the real world and have real meaning for the world we live in, attempts to establish the truth of their premises have met with limited success. It’s far too easy to attack their main premise and show that it doesn’t HAVE to be true, and that we can reasonably doubt their arguments. For the Problem of Evil, it’s too easy to show that the evil that God is supposedly allowing need not be considered evil, or evil for God to allow, that even random suffering may have a higher purpose, and that the evil that humans do may be a requirement for free will. For the argument from design, a large part of “The God Delusion” is aimed precisely at showing that our universe doesn’t have to be designed.
Both arguments are far too vulnerable to doubt; as soon as we can doubt their premises, they devolve to “it could be this way”, which takes us far out of the realm of knowledge – even though that doubt in no way proves the converse. So casting doubt on the Problem of Evil does not prove that God exists; casting doubt on the argument from design does not prove that God does not exist.

To that end, if you are reading any of my replies to Dawkins in this critique and are tempted to exclaim “But, but … that doesn’t prove that God exists!”, congratulations. You are halfway there. All you need to do is take the next step and realize that since I don’t think God CAN be proven to exist I’m not TRYING to prove that God exists. If anyone comes out of reading my critique convinced that I’ve proven that God exists, I can assure you that that conclusion is through no fault of my own or through my arguments, since they aren’t strong enough or intended to be strong enough to prove that.

Yet, I believe that God exists, which makes me a theist. So in this discussion, the underlying details of what it means to believe, what belief is for, and when it is justified will come up; in short, we are going to be discussing epistemology. I won’t go into anything in detail here, but note that, to me, being a strong atheist, a weak atheist and a theist are all at the same level of rationality, belief-wise. Ultimately, one’s other beliefs and principles will determine where someone ends up on this scale, and that’s okay as long as they’re consistent about it. But the question of when we are justified in believing will come up again.

So, let me leave you with some questions to ask while reading “The God Delusion”:

For atheists: If you weren’t already an atheist, would you find the arguments overwhelmingly convincing? In short, if you put aside the beliefs that lean you towards atheism, would you find his arguments so strong as to convince you anyway?

For theists: Is it possible that Dawkins could be right? If you accept that Dawkins could be right – as I insist is indeed the case – does that really matter that much to your belief?

For all: Reading what Dawkins says carefully, is it possible that his stance on this matter – though not “supernatural” – maintains a lot of the bad qualities of religion that he is opposing? In short, is he a deeply and irrationally religious naturalist?

Before getting into the critique proper, I want to highlight three egregious errors that Dawkins has made, to bring them out into the open so that we don’t trip over them later. What does it take for an error to be egregious? Well, obviously it isn’t just that they’re wrong; I’m sure I’ll point out some other things he gets wrong as we go along. It isn’t even that they’re fundamentally wrong; I’m sure I’ll find some more of those as we go along as well. It’s that they’re fundamentally incorrect interpretations or statements, and wrong in such a way that they make his arguments far more credible than they would otherwise be. I don’t intend to imply that he’s misinterpreting things deliberately, but the effect is the same: his arguments look far better than they would if he’d gotten these things right.

**NEW** I’m going to update one thing here: for 2) and 3), he’s in good company. 2) might not be an egregious error because it seems to be a fairly common stance these days. I still think it wrong, and do think it important to note, so I’m leaving it in, but it isn’t as bad as I made it sound here. **NEW**

1) Dualism.

Dawkins gives a description of dualism as follows: “Dualists readily interpret mental illness as ‘possession by devils’ … Dualists personify inanimate objects at the slightest opportunity, seeing spirits and demons even in waterfalls and clouds.” [pg 209]. Oh, those wacky dualists, believing that the trees have spirits and minds and all of that nonsense. Surely we can all see how bad and wrong dualism is, right? Well, if that was what dualism really did entail, certainly. But that’s not what dualism entails.

The most ironic thing here is that Dawkins, at the beginning of the paragraph that contains the above quote, gets the definition right: “A dualist acknowledges a fundamental distinction between matter and mind” [pg 209]. If he’d stopped there, he’d have been okay. But he didn’t; instead, he went on to make all sorts of claims about what that entails that, well, it doesn’t.

The most famous philosophical dualist is probably Rene Descartes. While I know that he thought that humans had minds – and that he held that minds were not physical – I’m quite sure that he didn’t think that trees had minds. Or waterfalls or clouds, just to make that clear. I’m not even certain that he thought that animals had minds. Philosophical dualism limits minds to things that act in some way as if they have them, such as us – of course – and possibly animals. There is no thought of giving minds to inanimate objects. So as a characterization of philosophical dualism, Dawkins’ description horribly misrepresents it.

Dawkins could be trying to link mind to soul here and then criticize religious views of the soul. Unfortunately, most of the religions he’s trying to criticize don’t think that inanimate objects have souls; again, there’s even debate over whether or not animals have souls. So, again, it doesn’t get there.

Now, animism does think that everything has a mind or soul, but that is generally more properly associated with IDEALISM, which states that there is no matter at all, only the mental. Dualism, obviously, is not an idealist philosophy since it holds that there is matter, but that the mental is not matter. So he simply does not accurately depict dualism at all.

This one is actually not of great importance, since Dawkins doesn’t use it to make any really important point (he doesn’t really think that dualism is the psychological root of religion). But it makes his account of the roots of religion – and of why, if that account were true, we should abandon religion – seem far stronger than it really is. Dualism is in no way as ridiculous as he makes it seem.

2) Absolutism versus consequentialism.

Dawkins sets up in his discussions of morality a pair of competing theories, and strongly suggests that they are directly competing. He compares the absolutism of religions with the consequentialism of the moral codes he wants to espouse, and asks if we wouldn’t rather be consequentialist than absolutist.

Absolutism is the idea that there is a set of absolute and objective moral standards that everyone should follow if they want to act properly morally. It is opposed by relativism, which states that what is moral depends greatly on what the individual thinks is moral – in the extreme case, you can only state what you think is immoral as your own opinion, one that has no bearing on what anyone else thinks or should think is immoral. Consequentialism is the idea that only the consequences of your actions matter in determining whether or not an action is immoral. It is opposed by intentionalism, which states that what you meant to do determines whether or not the action is immoral, regardless of whether what you intended to happen actually happens.

So you can have an absolutist consequentialist moral code: one that claims that there is a set of absolute moral rules where morality is determined by what actually happens as a result of your actions. In fact, I’d argue that Utilitarianism is an absolutist consequentialist moral code: it states that the right moral action is the one that has the greatest utility, which basically means the one that promotes the most happiness while adding the least suffering. Utility – what suffering and happiness mean – is meant to be objective and mean the same thing – basically – for everyone. So, absolutist. And what actually happens matters more than intentions, so consequentialist.

Actually, it’s hard to imagine any moral code that purports to be objective that wouldn’t be absolutist. Subjective moral codes tend towards relativism. But I digress.

Why is this egregious? Well, think about how people would react if Dawkins had said that he was opposing absolutist moralities in general and thus proposing that our moralities be relativist; in short, that one can only be expressing an opinion when one says “Slavery is immoral”. This would certainly lead more people to thinking – even if religious moralities aren’t correct – that absolutism is the right sort of moral code. And think about what the reaction would be if people knew that the opposition to consequentialism is “What you mean to do matters more than what happens”. This would certainly get people thinking about all the hard questions like “What if I – by accident – turn off someone’s furnace and cause their pipes to burst in the winter? Is that, itself, immoral, and as immoral as if I had done so deliberately?” Which would make consequentialism seem like far less of a good deal, and so weaken his case for why consequentialism is the right moral philosophy to follow.

In short, if Dawkins had actually accurately represented the moral arguments, people might have found them less clear cut, just as has been found in philosophy for hundreds if not thousands of years. Funny, that …

At any rate, in his attempts to show that religions have a bankrupt morality, he misleads by not giving the debates their proper due. That’s egregious, and critical to much of his book.

3) Reciprocal altruism …

… isn’t. Altruism, that is.

Throughout the chapter on “The Roots of Morality”, Dawkins is trying to justify the major “altruisms” as being derivable from his view of Darwinian natural selection. He tosses this out as one of the altruisms that can be explained by it. He describes it himself as “You scratch my back, I’ll scratch yours” [pg 247] and says it’s a “ … main type of altruism that we have a well-worked-out Darwinian rationale [for] … “ [pg 247]. The problem is that this isn’t a form of altruism at all.

There are multiple ways that altruism is talked about. Some say that in order for an action to be considered altruistic, it cannot benefit the person taking the action at all. Some say that it must actually involve giving something up, so that they end up worse off than they were before. Some allow for some benefit, but where that line gets drawn differs from theory to theory. But from all of this, one thing is clear: altruism, at its heart, means that the action is taken without regard for the benefit that you might get from it.

Reciprocal altruism is not that sort of action, and so is not altruism at all. The idea is simply that you give something to someone else PRECISELY because you’ll get a benefit from it if you do. That’s not altruism. Sure, we can explain this sort of “altruism” and see how it happens in everyday life, but it isn’t any sort of altruism at all. The benefit is front and centre in the decision made; you do it simply because you will benefit from doing so.

Dawkins can try to claim that he’s talking about altruistic actions that we DON’T think that about, but that benefit us anyway; however, it’s hard to see how his examples give him an explanation that works for cases where we make the decision without thinking about the benefit. He’d have to be referring to a subconscious notion of “scratch my back and I’ll scratch yours”, but since nothing in evolution can force others to do what you want, there doesn’t seem to be the evolutionary benefit that he needs to make his case. In short, while someone might develop a subconscious desire to give things away in the hope of getting things back, this could only survive if everyone else developed it as well – which would itself still require an explanation.

This is problematic because this misunderstanding about “altruism” may undercut all of his discussions on altruism; it may well be the case that in all of the cases he’s cited the benefit is, for most people, front and centre and so there’s no altruism to explain. But Dawkins desperately wants to be able to explain altruism, because one of the main objections to Darwinian explanations about our behaviour is that it can’t account for altruism. Dawkins could take the route of Hobbes and insist that we aren’t really altruistic, but while he does that in effect here he won’t come right out and say it; instead, he tries to redefine altruism to include cases where we are clearly acting deliberately for our own benefit.

9 Responses to “Critique of “The God Delusion”, part 1.”

If altruism is dependent on the ignorance of possible benefits on the part of the altruist, then the following two models rescue Dawkins.

One:
If you had two populations, one with real altruism as you define it (population A) and one without (population NA) – for the moment not considering whence that altruism came – and these populations competed for the same resources, then any advantage bestowed by a population’s altruism would result in a net benefit for each individual, even if individual actions are taken that are not beneficial on their own.

Second:
Reciprocal altruism works even if the altruists don’t know why it is they are altruistic, which is Dawkins’ point. If behaviour evolved such that doing something for someone increases social bonding (which is indeed the case – if you loan people money, you will trust them more, which is counter-intuitive on its face), you will get reciprocal altruism regardless of the ignorance of the individuals. The reason for the altruism is a net benefit for the individuals, but the individual is neither consciously nor subconsciously aware of that. The individual acts without expecting benefit; it is simply part of the make-up of its behaviour. To accept this you have to accept that behaviour is not entirely free, that will is not unrestricted, but I think you do accept that, judging by what you said about determinism.

The problem with these models is the classic one: what about cheaters, meaning people who take from the altruists but aren’t altruistic back? On the evolutionary scale, they’d get more resources and so would benefit, and pretty much every model that could address that relies in one way or another on consciously refusing to help people who won’t help you back (or whom you don’t trust to help you back). That consideration, then, isn’t altruistic, and so it’s still not actual altruism.
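The cheater worry can be made concrete with a toy replicator model – entirely my own sketch, with made-up payoff numbers, not anything from Dawkins or from the models above. Everyone in the group benefits in proportion to how many altruists there are, but only the altruists pay the cost of helping, so cheaters are always slightly fitter and the altruist fraction collapses:

```python
# Toy replicator model: unconditional altruists vs. cheaters in one group.
# All numbers (benefit b, cost c, baseline fitness) are made up for
# illustration; the point is only the direction of the dynamics.

def step(p_altruist, b=3.0, c=1.0, base=10.0):
    """One generation of replicator dynamics.

    Everyone benefits in proportion to the fraction of altruists
    (b * p_altruist), but only altruists pay the helping cost c,
    so cheaters always do slightly better."""
    f_alt = base + b * p_altruist - c
    f_cheat = base + b * p_altruist
    mean = p_altruist * f_alt + (1 - p_altruist) * f_cheat
    return p_altruist * f_alt / mean

p = 0.99                      # start with almost everyone an altruist
for _ in range(500):
    p = step(p)
print(round(p, 4))            # → 0.0: the altruists have been driven out
```

Note that this only shows the within-group dynamic; the between-group advantage of the first model would pull the other way, which is why the argument turns on how cheaters are kept in check.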

I recall an article somewhere arguing that a mechanism like the one you suggest in your first model could work (though I can’t remember the details), but it was dependent on people not co-operating at all. Following Hobbes, we can move from a state of nature to a social society simply by recognizing our own interests and then agreeing on that basis, which looks a lot like what we have. Then altruistic people can survive – since they will get repaid by the social interest even if they do not evaluate it on that basis – and our cheater-detection mechanisms would class them as more trustworthy. But those people would not be using reciprocal altruism, and would be benefiting from people who are not truly altruistic but do believe – consciously – in quid pro quo.

Well, if cheaters are prolific and permeate a herd, it will presumably destabilize it. The same argument could be made for killing: killing can be a net benefit for the individual, but with too many killers in a group the group becomes unstable (or extinct). The same goes for rape; there is no evolutionary benefit if there are so many rapists that there is no reproductive advantage in raping anymore.

You could view this as akin to self-balancing processes in nature in general, such as the predator/prey balance.

Anyway, I have a more fundamental problem, and I don’t understand how your post solves it: by your definition, if we live in a world in which it is advantageous to act from time to time with no conscious self-interest, then there is no altruism. Consider a subject that does something truly altruistic. Yet since it acts in a context that rewards such acts in some way, it receives a benefit, and thus its behaviour is not, in fact, altruistic. I am of the opinion that we live in such a world, though in principle that has to be demonstrated, of course, for any pragmatic assessment.

Altruism decoupled from the intent of the possible altruist in such a world is logically impossible. That is why I define altruistic behaviour as behaviour that is not consciously self-serving. I see no other way to have a meaningful use for the word.

Ah, but as I said that issue only occurs if cheaters can’t get together to co-operate using self-interest. And that’s where Hobbes comes in; they agree to co-operate because they are consciously aware that co-operating benefits them. That, however, would not be altruism.

As for the second point, I’m actually pretty sure that my definition focuses more on conscious benefit than on anything else. Some definitions do say that you can’t benefit at all, but I don’t really like that one. But even using conscious intent causes problems for Dawkins, as I argued: “But from all of this, one thing is clear: altruism, at its heart, means that the action is taken without regard for the benefit that you might get from that.”

It is funny how we do not really disagree, but still somehow disagree.
I will use the following terms: If I am talking about “real” altruism as I understand your use of the term, I will use “altruism” and other forms like “altruist” or “altruistic”. If I am using “altruism” in a sense that is not yet shown to be altruistic, I will add “p-” for “possibly-” as a prefix.

I don’t get how reciprocal altruism then is not altruism if the fact of its reciprocity is not available to the conscious understanding of the acting person. Do you argue that, in reciprocal scenarios, the actors /know/ that there is reciprocal altruism at work, i.e. that they can expect similar acts back?

I think that is just an assumption. It could well be the case that altruistic acts beget altruistic acts, and that beings, especially humans, have figured this out ex post facto, debate about it, and sometimes perhaps even cunningly act on this information; but in the moment of engaging in p-altruism, this is not necessarily consciously present. It is an attribute of society (or herd behaviour), not part of the conscious motivators for the act.

Sidenote: you could argue that there is always the wish to help someone else, and since that wish is fulfilled by the p-altruistic act, the act isn’t real altruism; but then there is no altruism whatsoever or, alternatively, only an act that is caused by nothing – not even the free desires and wishes you postulated in our brief discussion on determinism – and is thus entirely random.

So, to clarify: if someone helps someone else solely because he or she wishes to help, is this altruism? Regardless of whether the act will in fact have benefits the helper simply did not think about, or has implicit benefits that stem from the act itself?

In my model, it is; and my understanding of whether it is in your model alternated wildly throughout this thread – a thread that I have now, as you might have noticed, managed to continue by for once clicking “reply” :).

I will not ask further, because if I fail to understand your contention after this, I am clearly unable to do so, in which case I will return at a later date with hopes of an epiphany.

We both agree that unconscious reciprocal altruism at least probably is reciprocal altruism.

My argument is that we have a reasonably plausible evolutionary explanation for conscious reciprocal altruism, but unconscious reciprocal altruism breaks down when you introduce the possibility of cheaters. So we’d need an explanation of how unconscious reciprocal altruism can survive wrt evolution. The “altruistic society” is a good one, but doesn’t quite work because of cheaters, as I said, and because self-interest can motivate precisely the same social behaviours without being vulnerable to cheating.

I see. This is a first attempt to construct a model for cheat detection. It is likely not yet very good.

Cheating is relatively complicated behaviour. To implement it in thought you require a theory of mind, and specifically second-order reasoning: the understanding that another has a model of the world that does not necessarily reflect its true state (or what you yourself consider to be its true state), and the ability to reason about what the other person thinks about your model of the state of the world. Theory of mind is a primary model in cognitive science and, to a degree, in linguistics, with which it overlaps, in case you are interested.

If you reach the stage of a theory of mind, and the ability to consciously cheat, then this same state also grants the ability to understand and counteract cheating.

So I think what needs to be discussed is instead subconscious behavioural explanations, since we have so far explicitly put altruism into that category. What you then need are subconscious behavioural patterns that detect and “punish” cheating. To do that, you can use a system similar to avoidance behaviour; colloquially we could call that learned fear. If an animal* burns itself, for example, it will learn to avoid doing so because pain negatively impacts the animal, which is communicated specifically with hormones. If, for example, grooming a fellow ape raises your level of happiness and not being groomed in return lowers it, then a pattern of reward-punishment (grooming, but not being groomed in return) versus reward-reward (grooming and being groomed in return) can lead to a naturalistic anti-cheating system.
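That reward-punishment pattern can be sketched as a simple learned-avoidance rule – the update rule, learning rate, and numbers here are purely my own illustrative assumptions. The propensity to groom a given partner drifts up after reciprocated grooming and down after unreciprocated grooming, so the animal ends up shunning the cheater without any conscious reasoning:

```python
# Sketch of a subconscious anti-cheating mechanism as learned avoidance:
# reciprocated grooming is rewarding, unreciprocated grooming is punishing,
# and the reward nudges the propensity to groom that partner again.
# The update rule and learning rate are illustrative assumptions.

def update(propensity, reciprocated, rate=0.2):
    """Nudge grooming propensity toward 1 after reward, toward 0 after punishment."""
    target = 1.0 if reciprocated else 0.0
    return propensity + rate * (target - propensity)

p_friend = p_cheater = 0.5    # start indifferent toward both partners
for _ in range(20):
    p_friend = update(p_friend, reciprocated=True)     # friend grooms back
    p_cheater = update(p_cheater, reciprocated=False)  # cheater never does

print(round(p_friend, 3), round(p_cheater, 3))  # → 0.994 0.006
```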

The main problem with that, as I see it, is that you need the cheater detection first in a species that reproduces quickly or is under high selection pressure, and I have so far argued for a system that arises after altruistic behaviour comes into being.

However, I’d argue that such a system can develop on its own for a different purpose and be co-opted. I need to think a bit more about the purpose of such a system. My first hunch – a system that rewards finding edible things (eat-sated versus eat-vomit) which is then hijacked to model behavioural expectations – doesn’t “feel” right, but I can’t say why at the moment. The only way to argue about the reality of my proposed model would be empirical science anyway, so this is only a thought experiment at the moment. Perhaps I will see if I can find research that might have been done about that.

An alternative solution of course is a state of the world in which there is no large selection pressure, and thus time to develop such a detection system; and apes have a relatively long reproductive cycle anyway. This presupposes that only apes are capable of altruism, which I don’t know to be true.

Good night for now.

* I think that is at least true for mammals and some species of reptile (birds)

The problem with this is that it breaks down the instant we can have conscious thought and decision making. The conscious cheater-detection mechanisms almost certainly have to predispose one to actual conscious reciprocal altruism, where you help those who can or will help you back, to avoid being a sucker. So, if a conscious altruism survives at all, it’d have to piggyback on the social rules that are explicitly not altruistic.

At this point, it might just be easier to go along with Hobbes and say that actual altruism doesn’t exist at all, since humans, for better or for worse, act consciously.

I’ve got to say that I think I was a bit brusque here, and that “unconscious reciprocal altruism” might be a bit more complicated than we originally thought.

So, on that: when I used that term I was using it to mean something more like “emulated reciprocal altruism”, where the motivation – taking the conscious one – is altruistic, but in practice, if everyone acts altruistically, that emulates a society where everyone consciously acts on reciprocal altruism. Altruistic people helping others means that if I help someone, someone will help me if we all just act naturally, even though none of us consciously or subconsciously considers that return help when deciding what to do.

However, subconscious reciprocal altruism would include cases where your conscious motivations aren’t your actual motivations, and your real motivations are subconscious and are based on benefit. If, for example, someone thought that they acted altruistically but we discovered that they only acted altruistically in cases where there is explicit benefit, we might reasonably question whether that person is actually being altruistic or not. If we subconsciously filter on quid pro quo, it’d be hard to see how we could really be altruistic at all. It was my comment on Hobbes that reminded me of this, since he’s explicit that those sorts of unconscious motivations are not altruistic, and I’d agree with him on that, I think.

So, if the motives are aimed at reciprocal altruism – either consciously or unconsciously – then I’d say that that’s not really altruism. Since we know that reciprocal altruism can be explained evolutionarily but that strict altruism at least currently cannot, emulated reciprocal altruism is the best bet for explaining altruism (if we want to claim that we are, in fact, altruistic; Hobbes denies this). But emulated reciprocal altruism, I’d argue, breaks down when we introduce cheaters, who take advantage of altruistic tendencies to gain at the expense of others. Either the altruistic society breaks down once cheaters pass a certain threshold, or cheater-detection mechanisms appear which, it seems to me, can only defeat cheaters – even in your example – by implementing conscious or unconscious reciprocal altruism: you define whom you help by how likely you think it is that they’ll help you. At this point, we’re at least drifting away from altruism towards explicit reciprocal altruism. And introducing benefits to helping won’t defeat cheaters who don’t have those mechanisms – ie people who don’t feel good when they help people – and takes us away from altruism in another way by making it likely that we help not just to help, but to get the “feeling good” benefit, which Hobbes, if I recall correctly, makes hay with as well.
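The “help those you expect to help you back” strategy amounts to something like tit-for-tat in an iterated prisoner’s dilemma. A quick sketch – the strategies and the standard payoff numbers are my illustrative choices, not anything from Dawkins – shows the contrast: the unconditional altruist is exploited by the cheater, while the conditional reciprocator cuts the cheater off after one round. And conditioning help on expected return is exactly what makes the surviving strategy not altruism:

```python
# Iterated prisoner's dilemma sketch. The unconditional altruist always
# cooperates; the cheater always defects; tit-for-tat cooperates first
# and then mirrors the partner's previous move. Standard PD payoffs,
# chosen purely for illustration.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def altruist(opponent_moves):
    return 'C'

def cheater(opponent_moves):
    return 'D'

def tit_for_tat(opponent_moves):
    return opponent_moves[-1] if opponent_moves else 'C'

def play(a, b, rounds=10):
    """Total payoffs for strategies a and b; each sees the other's history."""
    score_a = score_b = 0
    moves_a, moves_b = [], []
    for _ in range(rounds):
        ma, mb = a(moves_b), b(moves_a)
        pa, pb = PAYOFF[(ma, mb)]
        score_a += pa
        score_b += pb
        moves_a.append(ma)
        moves_b.append(mb)
    return score_a, score_b

print(play(altruist, cheater))      # → (0, 50): the pure altruist is a sucker
print(play(tit_for_tat, cheater))   # → (9, 14): exploited once, then cut off
print(play(tit_for_tat, altruist))  # → (30, 30): full mutual cooperation
```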

So, we don’t have a nice way to explain altruism wrt evolution, but following Hobbes there could be a way out: denying that it exists at all, or that it exists as anything other than a maladaptive trait that happens to survive, like nearsightedness.