Defending Scientism

Moral realism is the doctrine that there are “moral facts”. Moral facts are declarations of what is or is not moral (“Stealing is morally wrong”) or of what we ought or ought not do (“We ought to abolish the death penalty”). In order to be “facts”, these statements have to describe objective features of the world, and so be independent of subjective human opinion on the matter. In order to be “moral” facts (as opposed to other sorts of facts), they need to declare what, morally, we ought to do or not do.

I’m an anti-realist. As I see it, the only form of “oughtness” that actually exists is instrumental oughtness. That is, statements of the form “If you want to attain Y, you ought to do X”. Such statements, termed hypothetical imperatives by Kant, can be objectively true descriptions of how things are. The statement “If you want to attain Y, then you ought to do X” can be re-phrased as “Doing X will attain Y”, which can indeed be a true fact about the world.

However, the oughtness, the conclusion “I ought to do X”, rests on wanting Y. And wanting Y is a human value or desire, and so is subjective. Hence, hypothetical imperatives do not amount to objective “ought” prescriptions. Thus hypothetical imperatives are generally not regarded as “moral facts” of the sort needed to establish moral realism. (Indeed, after discussing “hypothetical imperatives”, Kant then went on to try to establish “categorical imperatives” for that reason.)

Everyone who considers this topic accepts the existence of the “instrumental oughts” decreed by hypothetical imperatives, and yet only half of moral philosophers are moral realists. Moral realism is generally held to be the much stronger notion that there are “moral oughts” that hold objectively, regardless of how we feel about them; things that we “ought to do” regardless of our personal desires.

Or so I thought. But I recently read an article by Richard Carrier, the secularist blogger, author and historian best known for his work on the historicity of Jesus, in which he argues that hypothetical imperatives can indeed be objective moral facts, and thus that moral realism is true.

His argument can be summarised from the premises:

(1) There will be some outcome that John most wants.

(2) There will be some action that best attains what John most wants.

… followed by the hypothetical imperative:

(3) In order to attain that outcome, John ought to take that action.

All of the above are objective facts about the world. Carrier then reasons: Given (1), (2) and (3), we have the conclusion:

John ought to take that action.

He maintains that this conclusion is also an objective fact about the world, a “moral fact” that establishes moral realism.

That argument depends on treating the English language as a formal logical system, leading to the syllogism:

(1) If A then ought-B; (2) A; therefore ought-B.

But common-usage languages are not formal logical systems. What is the actual content of the statement: “In order to attain that outcome, John ought to take that action”? It surely means (re-phrasing without the word “ought”): “Taking that action will attain that outcome”. Does the version including the word “ought” connote anything additional to that re-phrasing? I don’t see that it does. (And if it does, then what?)

But if it doesn’t then the phrase “… John ought to take that action” cannot be separated from the “In order to attain that outcome …”. The phrase “John ought to do X” is then an incomplete thought, inviting the question “else what?”, in the same way that “taking that action will …” is an incomplete thought. Carrier’s attempt to translate a hypothetical imperative into an objective “ought” seems to me to fail.

If a hypothetical imperative could qualify as a “moral fact” then it would have to be the case that the statement “Doing X will attain Y” could also be a moral fact, since that means the same thing. (Again, if anyone wants to argue that there is more to a hypothetical imperative than that then please elucidate.) But I doubt if philosophers generally would accept that factual statements of the form “doing X will attain Y” are “moral facts”.

Indeed, my criticism of moral realism rests on the basic question: What does “John ought to do X” even mean?

I can translate a hypothetical imperative into a different phrasing, and so I understand what an instrumental ought amounts to, but I don’t understand what an “objective ought” is even supposed to mean. And I’ve never heard a moral realist give a proper explanation; they tend to treat it as intuitively obvious and so don’t ask the question. And yet, if we’re examining the very roots of morality, we need an answer.

I read Carrier’s article since, as an anti-realist, I try to look for good arguments for moral realism. But I don’t find his argument convincing. I do think that his account of morality, as containing nothing more than human values coupled with hypothetical imperatives, is actually the correct one, but it seems to me to be better labelled “anti-realist”.

This illustrates an interesting foible of human psychology. People’s intuitive sense of moral realism is so strong that they feel there is something badly wrong with an anti-realist conclusion, even when reason leads them that way. They really do want there to be some way in which morality can be labelled “objective”, and they are willing to try hard to construct (faulty) arguments to that end. The better conclusion is to realise that there is nothing wrong with morality being subjective!

The Claim of Scientism can be stated overly crudely as “science is the only way of answering questions”, which of course is guaranteed to raise hackles. But in the non-strawman version scientism does not assert that the humanities can never contribute to knowledge, instead it asserts that ways of finding things out are fundamentally the same in all disciplines. Any differences in methods are then merely consequences of the types of evidence that are available, rather than reflecting an actual epistemological division into “different ways of knowing”. The prospect is not, therefore, of a hostile takeover of the humanities, but of a union or consilience (to use a term that E. O. Wilson revived from Whewell).

In its least offensive statement, scientism states that science is pragmatic, and that it will use any type of evidence that it can get its hands on. It holds that the best understanding is produced by combining and synthesizing different approaches, and that — since the natural world is a unified whole — different approaches to knowledge must mesh seamlessly and combine constructively.

The remnant of a supernova explosion which was recorded in AD 1006 by Chinese, Egyptian and European sky watchers.

As a real example, an astronomer could be studying the visible remnant of a supernova explosion. Knowing the age of the remnant would be crucial for calculating “hard physics” such as the energy of the explosion. So the astronomer would be very interested in sightings of the explosion found in thousand-year-old Chinese records.

But to interpret such records, and accurately date the supernova, one would need to know a lot about Chinese culture of the time, their calendar and how they counted years, how they referred to different positions in the sky, how they interpreted celestial events, and how that was bound up with their lore and religion. In other words, one would need to know a lot about history and culture, which are normally regarded as part of the humanities, not part of the physical sciences.

So would an astronomer start worrying that by using ancient records they were straying outside of science? Could they legitimately use such information, or might the resulting paper get rejected during peer review as being “not science”?

It has been suggested that these markings, made 6000 years ago in India, are a sky map recording an ancient supernova.

To a scientist, any such worry would be absurd. Of course historical records, of all types, are valid information that can be used to calculate the energy of an exploding star; why wouldn’t they be?

Scientists would, obviously, concern themselves with the reliability of the information, just as they do for any scientific information, but it wouldn’t occur to them to worry about any supposed line of demarcation, nor to worry about crossing it. Their whole world view — likely so obvious to them as to be unquestioned — tells them to regard everything as within bounds, all knowledge as within their purview.

Let’s take another example, that of migration patterns of human peoples over thousands of years. Anyone studying our past would use all the information they could get, whether that is “scientific”, “cultural”, “historical” or whatever. This might include archaeology, cultural patterns within archaeological finds such as pottery, geophysical surveys of the landscape, analysis of ancient pollen, genetic analysis of living peoples, genetic analysis of ancient skeletons, analysis of languages and language families, and consideration of historical records and cultural traditions.

Any attempt to create an epistemological divide between “science” and “history” is untenable. On what date in the past does the study of ancient humans stop being “history” (part of the humanities) and start being archaeology (is that a humanity or a science?) or paleontology (definitely a science)?

If the reply is that there is no clear demarcation, but instead a messy transition, then that concedes the point, since within the transitional period all types of evidence would be relevant and valid, and must combine coherently and consistently towards a unified truth about what did happen.

That must be the case, unless you are going to throw out the whole concept of objective truth, and argue that truth is socially constructed, and so declare that you simply don’t care whether or not your cultural history is consistent with the archaeology.

Of course the day-to-day practice of history is very different from that of, say, biochemistry, simply because the types of evidence available are very different and that dictates the style of investigation. The historian cannot adopt the test-tube style of a chemist. But then neither can the astronomer, nor can the practitioner of other historical sciences such as geology or paleobiology.

The availability of evidence determines the styles of investigation that are practical and possible, and science, being pragmatic, will adopt whatever methods work in that circumstance — and then attempt to mesh the different approaches into a coherent whole.

A style of literary analysis based on feeding a whole corpus into a computer and counting particular words and phrases is a valid way of studying literature. It doesn’t replace more traditional methods, it complements them. How well such tactics work is something to be carefully assessed, but one shouldn’t reject them a priori while muttering about the over-reach of science. Adding in new methods and styles of investigation can only be a boon; they can only aid us in reaching a better and more complete understanding.
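As a toy illustration of that word-counting style of corpus analysis (a minimal sketch with an invented two-sentence “corpus”, not any particular study’s method):

```python
from collections import Counter
import re

# Invented toy corpus standing in for a digitised body of literature.
corpus = [
    "It was the best of times, it was the worst of times.",
    "Call me Ishmael. Some years ago I went to sea.",
]

def word_counts(texts):
    """Count word frequencies across all texts, case-insensitively."""
    counts = Counter()
    for text in texts:
        # Split on runs of letters (and apostrophes), ignoring punctuation.
        counts.update(re.findall(r"[a-z']+", text.lower()))
    return counts

counts = word_counts(corpus)
# Print the most frequent words and their counts.
print(counts.most_common(5))
```

Real stylometric studies refine this in many ways (lemmatisation, function-word profiles, statistical comparison between authors), but the core is exactly this kind of counting, which complements rather than replaces close reading.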

The “unity of knowledge” thesis, in which styles of learning from both the humanities and the sciences can collaborate constructively, strikes me as both reasonable and conciliatory. A few years ago, though, such a statement by Steven Pinker in the New Republic received a bad-tempered response from non-scientist Leon Wieseltier. As discussed by Russell Blackford in his contribution to this volume, “many humanities scholars will interpret Pinker with alarm”, since they interpret the claim as being that “all problems are solvable through distinctively scientific techniques”, such that “contributions from the humanities — or even from such social-science disciplines as anthropology — are unwelcome or irrelevant”.

But such an interpretation is either a misunderstanding or a strawman, since, as Blackford also states: “it is not obvious who makes such a claim”. While many scientific techniques can contribute to knowledge about human history, culture and other domains that are labelled “the humanities”, none of this, Blackford continues, “goes anywhere near displacing, as opposed to supplementing and assisting, traditional forms of erudition and scholarship”.

In one of the more scientistic essays in the volume, Boudry agrees with the unity-of-knowledge thesis, or more precisely he asserts the commonality of epistemology.

My plumber may be quite adroit in investigating a leakage, but I would not ordinarily call him a scientist. […] From an epistemic point of view, however, there are plenty of commonalities between what a biologist is doing in the lab and what the plumber is doing when he is trying to locate a leak in my water supply. The plumber is making observations, testing out different hypotheses, using logical inferences, and so on. […] It would certainly be a peculiar usage of language to call my humble plumber a scientist, but then again, it would be strange to think that any point of epistemological interest hinges on withholding that status from him.

Philip Kitcher’s essay is billed as opposing scientism, being a lengthy paean arguing that “history and humanities are also a form of knowledge”, and describing the ways in which the style of enquiry must necessarily adapt to the subject matter. But this is only opposing the strawman version of scientism, not a scientism that anyone advocates. Kitcher himself concludes that: “human enquiry needs a synthesis, in which history and anthropology and literature and art play their parts”, offering “a partnership in which different strengths and styles are acknowledged and appreciated” and where “constructive criticism is given and received”.

A stance that is actually opposed to scientism would reject such a synthesis, and would argue that the natural sciences are irrelevant to the social sciences, the arts and to the humanities. This could arise, for example, if human minds really were a “blank slate” created entirely by culture, with genetics and biology playing no role.

Thus, while scientism argues for a consilience in which the social sciences and the humanities should look to biology and evolutionary psychology for partnership and two-way constructive criticism, the antithesis is the rejection of that synthesis in preference for the ideology that these disciplines operate in fundamentally different domains such that they needn’t talk to each other.

The compilation by Boudry and Pigliucci doesn’t contain any contribution arguing for such a divide, though such blank-slate and postmodernist ideologies have traction in too wide a swathe of academia. While an attempt at such an essay might have been an interesting addition, the thesis doesn’t seem to me remotely tenable, and neither of the editors has any sympathy with postmodernism.

Forthcoming Installment: the supposed divide between science and philosophy.

Just as Ireland votes to repeal its blasphemy laws, the European Court of Human Rights has ruled in favour of Austrian blasphemy laws. They’ve upheld the conviction of a woman who was fined for calling Muhammed a “paedophile”, a reference to his marriage to Aisha, which according to mainstream Islamic tradition occurred when she was six, and was consummated when she was nine.

I presume that the underlying logic goes like this. In keeping with trendy modern thought, they analyse everything in terms of power structures. Muslims in Austria are mostly a relatively recent immigrant community and are non-White, therefore they are “oppressed”. The convicted woman is a member of the Austrian “Freedom Party”, who are opposed to immigration, are regarded as “far right”, and are mostly White. Therefore they are the “oppressors”. And it’s the job of a Human Rights court to support the oppressed against the oppressors, so that’s how they ruled.

The convoluted excuse they came up with is that Muhammed continued to be married to Aisha when she was an adult, and indeed had sexual relations with other women, and therefore was not “primarily” attracted to under-age girls, and therefore the term “paedophile” is an unjustified insult. (Never mind that the vast majority of people who rightly get called “paedophiles” also have sex with adults.)

But it’s worth reading the Court’s analysis of Article 9 (freedom of religion) and Article 10 (freedom of expression) of the European Convention, because their interpretation totally guts Article 10 and means that Europeans now have little free-speech protection, at least when the topic is religion.

Article 9 declares:

Everyone has the right to freedom of thought, conscience and religion; this right includes freedom to change his religion or belief and freedom, either alone or in community with others and in public or private, to manifest his religion or belief, in worship, teaching, practice and observance.

The Court’s ruling acknowledges as much:

Those who choose to exercise the freedom to manifest their religion under Article 9 of the Convention, irrespective of whether they do so as members of a religious majority or a minority, therefore cannot expect to be exempt from criticism. They must tolerate and accept the denial by others of their religious beliefs and even the propagation by others of doctrines hostile to their faith.

But the ruling then proceeds:

As paragraph 2 of Article 10 recognises, however, the exercise of the freedom of expression carries with it duties and responsibilities. Amongst them, in the context of religious beliefs, is the general requirement to ensure the peaceful enjoyment of the rights guaranteed under Article 9 to the holders of such beliefs including a duty to avoid as far as possible an expression that is, in regard to objects of veneration, gratuitously offensive to others and profane.

This is important. The ECHR has segued from respect for people’s right to hold and practise their religion, to respect for the religion itself. It is saying that people cannot be properly free to hold and practise their religion unless everyone else has a duty to “avoid as far as possible” saying anything “gratuitously offensive” about that religion.

That doctrine is not in Article 9 and it is not in Article 10. It appears to have been created by the ECHR through case law. To justify it they cite previous rulings, but do not point to where the “right” is enunciated in the European Convention itself.

Indeed, the main citation given is to Sekmadienis Ltd. v. Lithuania. But there the court rejected the limitation on free expression, saying that a clothing company was entitled to advertise clothing with slogans such as “Jesus, what trousers!” and “Dear Mary, what a dress!”. They declared that a prohibition on this was “not necessary in a democratic society”! That ruling does indeed refer to “the protection of public morals and the rights of religious people”, but it does not enunciate nor defend any right to have one’s religion not be insulted.

And might one suspect that the real difference between the two rulings is that the Lithuanian one concerned the Christian religion?

The court continues:

Where such expressions go beyond the limits of a critical denial of other people’s religious beliefs and are likely to incite religious intolerance, for example in the event of an improper or even abusive attack on an object of religious veneration, a State may legitimately consider them to be incompatible with respect for the freedom of thought, conscience and religion …

But this is simply wrong. Respect for someone’s “freedom of thought, conscience and religion” is not the same as respect for that person’s opinions, it’s only respect for their right to hold and express them!

That difference is crucial, indeed it is the whole basis for modern democracy. Insulting someone’s views is not “intolerant”; trying to prevent them expressing those views would be intolerance, but publicly denigrating those opinions is not!

We can be as insulting as we like about government policies or about the policies of political parties that we are opposed to. That is not intolerance. And newspapers routinely run political cartoons that could be construed as insulting. Indeed, so long as we accept the right of a party to broadcast its views and campaign in an election, and the right of people to vote for that party, we are not being intolerant. Tolerating something is not the same as respecting it!

Having — wrongly — interpreted Article 9 as including a right to have one’s religion not be insulted, the ECHR then declares a tension between Article 9 and the free expression of Article 10.

The issue before the Court therefore involves weighing up the conflicting interests of the exercise of two fundamental freedoms, namely the right of the applicant to impart to the public her views on religious doctrine on the one hand, and the right of others to respect for their freedom of thought, conscience and religion on the other.

But, again, there is only tension here if one’s right to “freedom of religion” includes the right to have one’s religion respected, not just tolerated. It doesn’t.

Article 10 declares that freedom of expression:

… may be subject to such … restrictions or penalties as … are necessary in a democratic society … for the protection of the … rights of others …

And since, according to the ECHR, Article 9 entails a right to have one’s religion not be insulted, it is then (they think) “necessary in a democratic society” to prevent people “gratuitously insulting” a religion. And such restrictions are then not a violation of Article 10.

The ECHR thus upholds the Austrian conviction because it:

… carefully balanced her right to freedom of expression with the rights of others to have their religious feelings protected …

This guts the free-speech provisions of Article 10. If people have a right to “have their religious feelings protected”, then all someone need do is declare that someone else’s speech hurts their religious feelings, and suddenly there is no right to say it. There is only the much weaker idea of a “careful balance” between “freedom of expression” and people’s “right to have their religious feelings protected”. In other words, we all have to be very careful what we say when we criticise religion.

And that’s not all: by the logic of the ruling, the “right to have one’s religious feelings protected” has now been declared to be part of Article 9, and restrictions on free expression to ensure that protection are now “necessary in a democratic society”. So this ruling extends beyond Austria to the rest of Europe.

Philosophers Maarten Boudry and Massimo Pigliucci have recently edited a volume of essays on the theme of scientism. The contributions to Science Unlimited? The Challenges of Scientism range from sympathetic to scientism to highly critical.

I’m aiming to write a series of blog posts reviewing the book, organised by major themes, though knowing me the “reviewing” task is likely to play second fiddle to arguing in favour of scientism.

Of course the term “scientism” was invented as a pejorative and so has been used with a range of meanings, many of them strawmen, but from the chapters of the book emerges a fairly coherent account of a “scientism” that many would adopt and defend.

This brand of scientism is a thesis about epistemology, asserting that the ways by which we find things out form a coherent and unified whole, and rejecting the idea that knowledge is divided into distinct domains, each with a different “way of knowing”. The best knowledge and understanding is produced by combining and synthesizing different approaches and disciplines, asserting that they must mesh seamlessly.

A non-scientistic approach might reject this unified view. It might, for example, see sociology as divorced from biology. It might assert that culture is sufficiently independent of underlying biology that the biological sciences are irrelevant and can be ignored when dealing with sociology or politics or economics, which instead are independent and self-contained disciplines, complete in themselves. I would argue that this view is, at best, a needlessly self-limiting handicap, and at worst makes such disciplines prone to error and ideological fads.

A more fundamental rejection of scientism might see knowledge as having multiple and distinct sources. For example, one might argue that one domain of knowledge (“science”) arises from empirical evidence, whereas another, quite separate domain could arise from a priori reasoning. One could then assert that knowledge within one domain cannot be arrived at from another domain, and may not even be valid within other domains. Some would argue that the domains of ethics and mathematics are examples (of which more in later installments of this review).

In their introduction to the book, Boudry and Pigliucci explain that the question of scientism is one of two demarcation problems. The first is how to distinguish science from pseudoscience. The second is whether and how to distinguish “scientific” knowledge from other types of valid knowledge.

In his chapter, Pigliucci summarises philosophers’ responses to the first demarcation problem. For a while it was thought that Popper’s ideas of falsification provided a straightforward and clear criterion: if ideas can be falsified they are “science”, if they cannot then they are pseudoscience.

But it was soon realised that it’s not that easy. If a prediction turns out wrong, then clearly some part of the overall model is wrong, but one usually has considerable latitude in choosing which parts of the model to update. One can therefore protect a particular idea from falsification by instead adjusting something else. For example, if galaxies are found not to be rotating as expected, one could conclude that Newton’s law of gravity is falsified (we are dealing here with weak-field gravity where relativistic effects are negligible, so Newton’s gravity should work), or one can instead invoke additional, unseen “dark matter”.

A second problem is that Popper’s criterion gives no guidance on the practicalities. A prediction of a solar eclipse in thirty years’ time, based on well-tested models, is surely “scientific”, but it cannot be directly tested within the next decade. How about an eclipse prediction for a million years hence, or one for a million years in the past when no-one was there to record it? How about a prediction in particle physics that to test would require an accelerator ten times more energetic than we can currently build?

There’s a third problem: Is Popper’s maxim descriptive or prescriptive? If the latter then by what authority? Physicists generally regard the development of string theory as scientific (which is not the same as regarding it as proven), yet it is not readily testable. Some philosophers, including Pigliucci, have therefore claimed that it is not science but is rather metaphysics. But by what authority? If one were asked to justify the falsification criterion, how would one do it?

For the above reasons some philosophers have concluded that the task is hopeless. Pigliucci points to Larry Laudan as arguing that “demarcation projects are a waste of time for philosophers, since — among other reasons — it is unlikely to the highest degree that anyone will ever be able to come up with small sets of necessary and jointly sufficient conditions to define science, pseudoscience, and the like”.

Pigliucci himself regards this as too pessimistic, and instead argues for an account of science based on Wittgensteinian “family resemblance” concepts. There might not be neat criteria, but there are enough diagnostic characteristics that, in practice, it is possible to tell one from the other.

Personally I would argue that there is indeed one straightforward criterion distinguishing science from pseudoscience. It was stated by Feynman in his 1974 commencement address Cargo Cult Science, an essay still worth reading, for example for its prescience about the replication crisis in some areas of science.

For someone who was rather dismissive of academic philosophy, Feynman was actually pretty insightful about the nature and philosophy of science. He summed up science saying:

The first principle is that you must not fool yourself — and you are the easiest person to fool.

That’s it. Pseudoscience is when you treat adherence to an ideology or belief as more important than the evidence for it. Science is when you’re genuinely trying to adjust your beliefs to the evidence. Humans are hugely prone to cognitive biases, so can readily slip into pseudo-scientific thinking. Many of the methods developed by science — for example, randomised, double-blind trials — are attempts to minimise human cognitive bias.

By this definition, possibilities of ghosts, psychic powers, the supernatural and such are not ruled out by fiat, they are not “pseudoscience” because of the claims being made, they are pseudoscience because the evidence for the claims is grossly insufficient.

Feynman’s criterion also explains why Popper’s falsifiability is insightful. If one is genuinely trying to refute one’s ideas, by making predictions and then testing them, then one is least prone to ideological bias. Pseudoscientists, such as homeopaths, astrologers and conspiracy theorists, look only for evidence that will confirm their beliefs, and dream up excuses for why they cannot or should not look for refutations (an anti-scientistic appeal to “other ways of knowing” is a favourite).

But falsification is only part of the story. As above, sometimes one cannot test a prediction even if one would like to. That alone doesn’t make the enterprise pseudo-scientific; what matters is whether belief takes precedence over evidence. Thus, if a string theorist were to make dogmatic claims going well beyond the evidence then they’re not acting as a scientist. But a physicist who considers that string theory is a promising and worthwhile avenue to explore, while remaining critically aware of the difficulties of testing it, is indeed being entirely scientific.

Let’s accept this definition, though it’s important to note that no-one defending such a thesis would interpret “science” in a narrow sense, but would regard it broadly as including the gathering of empirical evidence and rational analysis and conceptualising about that evidence. Thus, “scientism” would not, for example, deny that historians can generate knowledge, it would instead claim that they are doing so using methods that are pretty much the same as those used also by scientists. The differences in approach then arise from the pragmatics of what sort of evidence is accessible, not from their being distinct and separate “ways of knowing”.

The philosophical case that Hall presents is based on the problem of induction. No amount of observing a regularity proves that it will still hold tomorrow. The supposition that it will requires a “uniformity of nature” thesis that the future will be like the past, and since we cannot obtain empirical evidence from the future, that thesis — it is claimed — cannot be proven by science.

Hall then argues that science finds this “Past–Future Thesis” indispensable, but declares:

… either the PFT can be justified on non-empirical grounds, or it cannot be justified at all. If we accept the first horn, then we are conceding that scientific observation is not the only source of knowledge, and thus that scientism is false.

Hall then declares that the PFT is indeed true, and says:

… since there is no empirical way of defending PFT, we are forced to conclude that defending the assumption — and ultimately defending science itself — must rest on a philosophical foundation rather than an empirical one. And, thus, it follows that the claim that science is the only source of knowledge is false.

He then, rather derisively, declares this to be basic stuff akin to “remedial pre-algebra”, and finishes with: “If popular science writers wish to defend scientism, they would do well to demonstrate a modicum of understanding of the best arguments against their position”.

So, according to Hall’s argument, science is not the only source of knowledge because: (1) we know that the PFT is true, and (2) we know that from philosophy rather than from science.

But strikingly absent from Hall’s article is any philosophical defence of PFT. If one wants to use this example to show that philosophy can produce knowledge where science cannot, one first has to show that philosophy proves the PFT true. Yet Hall does not do this.

So this refutation of scientism fails right there. Showing that science cannot answer a question is only halfway to a refutation of scientism, since one then needs to show that some “other way of knowing” can produce a reliable answer.

But can the use of induction be defended? Personally I think it can, though as a matter of probability and likelihood, not of rigorous proof. (But then it is accepted that science never produces absolute proof, but only provisional, most-likely models that are better than any known alternatives.)

Hall indeed considers this, suggesting that: “… if we look at the past, we see that the future resembles the past all the time, so there’s an overwhelming probabilistic case for the PFT”, but then objecting that: “in appealing to what’s happened in the past as a guide to what will happen in the future, the would-be defender is assuming the very thing in question”.

But, we can consider the set of all events, past and future. And we can consider picking from that set and encountering one thousand white balls in a row, with the next ball being black. Obviously, the likelihood of that happening will depend on the probability distribution governing the picks, and — ex hypothesi — we don’t know that distribution, since we don’t know about future events. But that sequence will have some probability, and so we can consider the ensemble of all possible probability distributions.

If there are long periods of stasis of unknown length, it is more probable that one is somewhere within the period of stasis rather than exactly at its end. That follows simply because there is only one “slot” at the end of the sequence but lots of slots that are not at the end. Given a long sequence of normality, and picking our location on that sequence at random, it is more likely that we will be somewhere boring in the midst of the sequence, rather than at the highly particular “last day of normality” right at its end. In essence, we’re not using the past as a guide to the future, we’re using it as a guide to the present time, and asking whether it is unusual.
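The “slots” point can be made concrete with a toy simulation. This is only an illustrative sketch: the range of run lengths and the uniform random choices are assumptions of the toy model, not part of the argument itself.

```python
import random

# Toy model of the "slots" argument: a run of normality has some
# unknown length, and our "today" is a randomly chosen day within it.
# The length range (1 to 10,000) and the uniform choices are
# illustrative assumptions only.
rng = random.Random(0)
trials = 100_000
at_end = 0
for _ in range(trials):
    n = rng.randint(1, 10_000)   # unknown length of the run of stasis
    today = rng.randint(1, n)    # where we happen to sit within it
    if today == n:               # exactly the last day of normality
        at_end += 1

# Being at the very end turns out to be rare: the fraction printed is
# small, because there is only one end-slot but many mid-run slots.
print(at_end / trials)
```

Whatever the true distribution of run lengths, the qualitative result is the same: a randomly placed observer almost never sits on the final day.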

This analysis requires us to conceptualise a bird’s-eye overview of the timeline, but it doesn’t require any assumption about the future and it doesn’t require knowing the probability distribution of future events.

Of course it is no guarantee, and for all we know the probabilities could be such that normality is coming to an imminent end. But the sub-set of probability distributions that make it likely that, after a thousand white balls in a row, the next ball is black, is much smaller than the set of all possible probability distributions. Only a very special and particular probability distribution could make it more likely that we are exactly at the end of such a sequence, rather than anywhere else along it. And, given that we don’t know the probability distribution, that is unlikely. So it is more likely than not that a sequence of stasis will continue with the next pick.

Again, this argument does not depend on assuming a uniform probability distribution, it only depends on there being a probability distribution, and on considering the super-set of all possible such probability distributions.
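One concrete member of this ensemble calculation is the classic “rule of succession”. If we model each pick as a Bernoulli draw with an unknown white-ball probability p, and average over all possible values of p — here with a uniform prior over p, purely as the simplest illustrative choice, not something the argument above commits to — then the probability that a run of k white balls continues works out to (k+1)/(k+2):

```python
from fractions import Fraction

def prob_next_white(k: int) -> Fraction:
    """P(next ball is white | k white balls so far), averaging over all
    Bernoulli distributions with a uniform prior on the bias p.

    The integral of p**(k+1) over [0, 1] divided by the integral of
    p**k gives (k+1)/(k+2) -- Laplace's rule of succession.
    """
    return Fraction(k + 1, k + 2)

print(prob_next_white(1000))         # 1001/1002
print(float(prob_next_white(1000)))  # ~0.999
```

So after a thousand white balls in a row, continuation of the run is the overwhelmingly likely outcome under this model; other priors over the distributions would change the exact number but not that qualitative conclusion.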

This line of reasoning has been proposed by Ray Solomonoff, who formalised and developed it into his “Formal Theory of Inductive Inference”. I’m not aware of any refutation of the argument and so I currently regard it as a sufficient resolution of the problem of induction. (Though part of the point of writing a blog piece about it is that, if it’s wrong, someone might tell me why!)

As regards scientism, a last question arises as to whether the above argument counts as “science” or as “philosophy”. It is certainly a rational analysis involving mathematical reasoning. It is not a rebuttal that can be observed empirically with a pair of binoculars or a microscope. But then no sensible account limits science to what can be directly observed. That’s only the half of it. Science is just as much about the concepts and rational analysis that make sense of the empirical world. Thus the above rebuttal is squarely within the domain of science, and so the attempt to defeat scientism fails.

Theos Think Tank have been asking people whether they regard religions as violent. By their own admission, they didn’t entirely like the results.

Nearly half (47%) agreed that “the world would be a more peaceful place if no one was religious”. Fully 70% said that: “Most of the wars in world history have been caused by religions”.

Faced with that, Theos’s Nick Spencer took some comfort from the fact that “only 32% agreed that religions were inherently violent”. Only? So one-in-three British people thinks that religions are inherently violent and this merits an “only”?

Can one imagine people saying that Cancer Research UK or the Battersea Dogs Home are inherently violent? I mention two charities because “promotion of religion” still attracts charitable status in the UK along with tax exemptions. That should surely change given that half the nation now thinks the world would be more peaceful without religion.

I would concur with those saying that the Abrahamic religions, at least, have an inherent tendency to violence. That’s because they think that morality flows from God and that moral conduct consists of believing in God and doing what God wants. From there, it’s a rather small step to thinking that anyone not of the right religious opinion is necessarily immoral for rejecting that religion’s beliefs and diktats. Hence one has a moral licence — or even a moral duty — to correct their errors, using force if sadly necessary.

The belief that obedience to God is morally paramount, even if it means killing someone, goes back of course to stories about Abraham himself. The Eid al-Adha festival is a public holiday celebrated throughout the Islamic world, honouring the willingness of Abraham to kill his son (Isaac in the biblical account, Ishmael in Islamic tradition) for no better reason than that God wanted him to.

Liberal Christians tend to squirm on this topic, changing the subject by saying that the important part of the story is that God rescinded the instruction. But do they go further and admit that Abraham’s intention to kill his own son was a moral failing on his part, and that a righteous man would have flat-out refused? No, they still laud Abraham’s obedience, and they even take that line in story books given to their kids (honest, they do!).

The story likely has no historical basis, but even so, if such stories are told as spiritual lessons, shouldn’t the supposedly peaceful Abrahamic religions now repudiate it? Until the mainstream religious opinion is moral condemnation of Abraham’s obedience, I submit that religions do indeed have an inherent tendency to violence.

Nick Spencer disagrees, saying that “You have to be pretty bone headed to believe — really and truly believe — that the great religions of the world preach violence and hatred. Go into any religious place of worship any day of the week and I would say the chances of hearing a kill the infidel sermon are vanishing small”.

So Imams in Pakistan do not preach in favour of their blasphemy laws, saying that blasphemers and apostates should be killed? No Imam has ever called for any punishment of Salman Rushdie? Friday prayers in Iran have never voiced hatred of the Great Satan, and mosques throughout the Islamic world never express any animosity towards Israel and the Jews?

Many Islamic countries prescribe the death penalty for apostasy or blasphemy. Several dozen people have been killed in Pakistan for mere accusations of blasphemy. In Bangladesh, multiple secularist bloggers have been killed by Islamists merely for criticising Islam. Other Islamic countries imprison, flog and outlaw secularists for speaking up.

This is not violence by lone rogues, but violence widely supported by mainstream Islamic opinion. Even voicing opposition to Pakistan’s blasphemy laws is itself considered blasphemous and so can get you killed, with wide swathes of that nation’s people openly supporting the murderer.

Nick Spencer’s claim reflects the gentle and anodyne theology of today’s Church of England, but Western Christianity has long been neutered by the Enlightenment and by secular values of church–state separation, individual rights, and religious liberty. This is tamed religion. But religion in the raw prevailed through much of the history of Christendom, and still blights the Islamic world.

Spencer continues: “Referencing the Crusades or the Inquisition is pretty poor work. Atheist regimes were more efficient and rather more recent in their genocidal efforts.”

And yet the Crusades and the Inquisition were not isolated aberrations, they were manifestations of how the Christian churches were for much of their history. And on the recent genocidal efforts, Third Reich Germany must take the prize, and yet was thoroughly religious and theistic. The fact that it was a nation that was 94% Christian that murdered millions of Jews with genocidal intent is something that Christians still don’t want to admit.

Spencer can rightly point to the large-scale atrocities of the communist regimes of Stalin, Mao and Pol Pot (motivated, by the way, by totalitarian communist ideology, not by the irrelevant point that they were “atheistic”), but is that what today’s religious apologists are reduced to? “Yes, half the nation thinks the world would be more peaceful without us, but ‘only’ a third think we’re inherently violent and … well, we’re not quite as bad as the totalitarian communist regimes.” As exculpation goes, that’s feeble!

Progress means accepting that we must not impose our ideology by violence even if doing so would be justified or even demanded by our God, our religion or our ideology. Because, judging by our religious history, we sure as hell cannot rely on Gods to be peaceful!

That Enlightenment principle is now widely accepted in the West, but can we hope that it will become accepted by those for whom the whole ethos of Islam, and indeed the very name itself, means “submission” to the will of God?

“Is there anyone (other than slave holders and Nazis) who would argue that slavery and the Holocaust are not really wrong, absolutely wrong, objectively wrong, naturally wrong?”

Yes, I would (and I don’t think I’m either a slave holder or a Nazi). That quote ends Michael Shermer’s recent defence of moral realism on his Skeptic blog.

My disagreement with Shermer comes down to what we even mean by morality being “objective” rather than “subjective”. Indeed this particular disagreement can account for a lot of people talking past each other. Shermer explains: