Archive for June, 2013

It’s always useful to know a statistics junkie or two. Brendon is our resident Bayesian. Another colleague of mine from Zurich, Ewan Cameron, has recently started Another Astrostatistics Blog. It’s well worth a look.

I’m not a statistics expert, but I’ve had this rant in mind for a while. I’m currently at the “Feeding, Feedback, and Fireworks” conference on Hamilton Island (thanks Astropixie!). There has been some discussion of the problem of reification. In particular, Ray Norris warned that, once a phenomenon is named, we have put it in a box and it is difficult to think outside that box. For example, what was discovered in 1998 was the acceleration of the expansion of the universe. We often call it the discovery of dark energy, but this is perhaps a premature leap from observation to explanation – the acceleration could be caused by something other than some exotic new form of energy.

There is a broader message here, which I’ll motivate with this very interesting passage from Alfred North Whitehead’s book “Science and the Modern World” (1925):

In a sense, Plato and Pythagoras stand nearer to modern physical science than does Aristotle. The former two were mathematicians, whereas Aristotle was the son of a doctor, though of course he was not thereby ignorant of mathematics. The practical counsel to be derived from Pythagoras is to measure, and thus to express quality in terms of numerically determined quantity. But the biological sciences, then and till our own time, have been overwhelmingly classificatory. Accordingly, Aristotle by his Logic throws the emphasis on classification. The popularity of Aristotelian Logic retarded the advance of physical science throughout the Middle Ages. If only the schoolmen had measured instead of classifying, how much they might have learnt!

… Classification is necessary. But unless you can progress from classification to mathematics, your reasoning will not take you very far.

A similar idea is championed by the biologist and palaeontologist Stephen Jay Gould in the essay “Why We Should Not Name Human Races – A Biological View”, which can be found in his book “Ever Since Darwin” (highly recommended). Gould first makes the point that “species” is a good classification in the animal kingdom. It represents a clear division in nature: same species = able to breed fertile offspring. However, the temptation to further divide into subspecies – or races, when the species is humans – should be resisted, since it involves classification where we should be measuring. Species have a (mostly) continuous geographic variability, and so Gould asks:

Shall we artificially partition such a dynamic and continuous pattern into distinct units with formal names? Would it not be better to map this variation objectively without imposing upon it the subjective criteria for formal subdivision that any taxonomist must use in naming subspecies?

Gould gives the example of the English sparrow, introduced to North America in the 1850s. The plot below shows the distribution of the size of male sparrows – dark regions show larger sparrows. Gould notes:

The strong relationship between large size and cold winter climates is obvious. But would we have seen it so clearly if variation had been expressed instead by a set of formal Latin names artificially dividing the continuum?

Beginning with Hugh Ross, I undertook to critique various articles on the fine-tuning of the universe for intelligent life that I deemed to be woeful, or at least in need of correction. A list of previous critiques can be found here. I generally looked for published work, as correcting every blog post, forum or YouTube comment is a sure road to insanity. I was looking to maximise the prestige of the publication, its “magic bullet” aspirations, and its wrongness about fine-tuning. I may have a new record holder.

It’s an article published in the prestigious British Journal for the Philosophy of Science by a professor of philosophy who has written books like “Introduction to the Philosophy of Science”. It claims to expose the “philosophical naivete and mathematical sloppiness on the part of the astrophysicists who are smitten with [fine-tuning]”. The numbers, we are told, have been “doctored” by a practice that is “shrewdly self-advantageous to the point of being seriously misleading” in support of a “slickly-packaged argument” with an “ulterior theological agenda”. The situation is serious, as [cue dramatic music] … “the fudging is insidious”. (Take a moment to imagine the Emperor from Star Wars saying that phrase. I’ll wait.)

It will be my task in this post to demonstrate that the article “The Revenge of Pythagoras: How a Mathematical Sharp Practice Undermines the Contemporary Design Argument in Astrophysical Cosmology” (hereafter TROP, available here) by Robert Klee does not understand the first thing about the fine-tuning of the universe for intelligent life – its definition. Once a simple distinction is made regarding the role that Order of Magnitude (OoM) calculations play in fine-tuning arguments, the article will be seen to be utterly irrelevant to the topic it claims to address.

Note well: Klee’s ultimate target is the design argument for the existence of God. In critiquing Klee, I am not attempting to defend that argument. I’m interested in the science, and Klee gets the science wrong.

Warning Signs

Klee, a philosopher with one refereed publication related to physics (the one in question), is about to accuse the following physicists of a rather basic mathematical error: Arthur Eddington, Paul Dirac, Hermann Weyl, Robert Dicke, Brandon Carter, Hermann Bondi, Bernard Carr, Martin Rees, Paul Davies, John Barrow, Frank Tipler, Alan Lightman, William H. Press and Fred Hoyle. Even John Wheeler doesn’t escape Klee’s critical eye. That is quite a roll call. Eddington, Dirac, Weyl, Bondi, Rees, Hoyle and Wheeler are amongst the greatest scientists of the 20th century. The rest have had distinguished careers in their respective fields. They are not all astrophysicists, incidentally.

That fact should put us on edge when reading Klee’s article. He may, of course, be correct. But he is a philosopher up against something of a physicist dream team.

Klee’s Claim

The main claim of TROP is that fine-tuning is “infected with a mathematically sharp practice: the concepts of two numbers being of the same order of magnitude, and of being within an order of each other, have been stretched from their proper meanings so as to doctor the numbers”. The centrepiece of TROP is an examination of the calculations of Carr and Rees (1979, hereafter CR79) – “[this] is a foundational document in the area, and if the sharp practice infests this paper, then we have uncovered it right where it could be expected to have the most harmful influence”.

CR79 derives OoM equations for the levels of physical structure in the universe, from the Planck scale to nuclei to atoms to humans to planets to stars to galaxies to the whole universe. They claim that just a few physical constants determine all of these scales, to within an order of magnitude. Table 1 of TROP shows a comparison of CR79’s calculations to the “Actual Value”.
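To give a flavour of the kind of calculation CR79 performs (this particular example is mine, not theirs, though it is of exactly their type): the characteristic size of an atom follows from just ħ, c, the electron mass and the fine-structure constant α.

```python
# OoM estimate of the atomic (Bohr) scale from fundamental constants:
#   a0 = hbar / (m_e * c * alpha)
hbar  = 1.0546e-34   # reduced Planck constant, J s
m_e   = 9.109e-31    # electron mass, kg
c     = 2.998e8      # speed of light, m/s
alpha = 1 / 137.036  # fine-structure constant (dimensionless)

a0 = hbar / (m_e * c * alpha)
print(a0)  # ~5.3e-11 m, the measured Bohr radius
```

Four constants in, the size of every atom in the universe out – that is the kind of result CR79 is claiming, and the claim is only ever made to order-of-magnitude accuracy.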

Klee notes that only 8 of the 14 cases fall within a factor of 10. Hence “42.8%” of these cases are “more than 1 order [of] magnitude off from exact precision”. The mean of all the accuracies is “19.23328, over 1 order of magnitude to the high side”. Klee concludes that “[t]hese statistical facts reveal the exaggerated nature of the claim that the formulae Carr and Rees devise determine ‘to an order of magnitude’ the mass and length scales of every kind of stable material system in the universe”. Further examples are gleaned from Paul Davies’ 1982 book “The Accidental Universe”, and his “rudimentary” attempt to justify “the sharp practice” as useful approximations is dismissed as ignoring the fact that these numbers are still “off from exact precision – exact fine tuning”.

And there it is …

I’ll catalogue some of Klee’s mathematical, physical and astrophysical blunders in a later section, but first let me make good on my promise from the introduction – to demonstrate that this paper doesn’t understand the definition of fine-tuning. The misunderstanding is found throughout the paper, but is most clearly seen in the passage I quoted above:

[Davies’] attempted justification [of an order of magnitude calculation] fails. 10^2 is still a factor of 100 off from exact precision – exact fine-tuning – no matter how small a fraction of some other number it may be [emphasis added].

Klee thinks that fine-tuning refers to the precision of these OoM calculations: “exact precision” = “exact fine-tuning”. Klee thinks that, by pointing out that these OoM approximations are not exact and sometimes off by more than a factor of 10, he has shown that the universe is not as fine-tuned as those “astrophysicists” claim.
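For what it’s worth, “agreeing to within an order of magnitude” has a perfectly standard meaning: the ratio of the two numbers lies between 0.1 and 10, i.e. their base-10 logarithms differ by at most 1. A minimal sketch (the function name is my own):

```python
import math

def within_order_of_magnitude(predicted, actual):
    """True if the two values agree to within a factor of 10,
    i.e. |log10(predicted / actual)| <= 1."""
    return abs(math.log10(predicted / actual)) <= 1.0

# An OoM estimate of 3e5 for an actual value of 1e6 is off by a
# factor of ~3.3, so it counts as agreement:
print(within_order_of_magnitude(3e5, 1e6))  # True
# A factor-of-100 discrepancy does not:
print(within_order_of_magnitude(1e4, 1e6))  # False
```

Nothing in that definition promises “exact precision”; a factor of a few either way is the whole point of the approximation.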

Well worth three minutes of your time is this video on sonic resonances in a 2D square board.

As the sound wobbles the board, standing waves are set up. Because these waves are two-dimensional, the resulting pattern is more intricate than for standing waves on a one-dimensional string.

The red dots are the places on a vibrating string that do not move – called the nodes. For a 2D membrane, like the one in the video above, these nodes are lines, and salt sprinkled on the board will naturally settle along these lines, since any grains not on the lines won’t sit still.

As well as being rather pretty, the video shows why drums are rhythmic instruments, rather than melodic (you wouldn’t ask the drummer to drum out the melody, and drummers don’t have to worry about key changes). When you pluck a guitar string, you get a note determined by the length of the string (and its tension and linear density). You also get, layered on top of that note, overtones. Because the string is essentially one-dimensional, these overtones are related to the fundamental tone by simple fractions. Thus, the fundamental and the overtones all sound good together – the overtones harmonize with the fundamental. (I’ve written in more detail about the musical scale here.) A skilful (bass-)guitarist can use his finger at a node to excite only these overtones, creating the so-called harmonics. Jaco Pastorius’ “Portrait of Tracy” is the classic example, and the technique has been expanded by Victor Wooten and others.

For the skin of a drum, however, there is no nice, neat relationship between the fundamental tone and the overtones. This is shown in the complexity of the patterns in the video above. The result is that there is no one pure “note” that a particular drum makes, but rather a somewhat atonal mixture of notes. Tuning a drum generally involves trying to eliminate the overtones, with the final result being a strong function of a drummer’s personal preferences about what sort of tone s/he wants.

(I have a half-written post titled “Drummers, Metronomes and the Tyranny of the Beat”, but I’ll save that for another day.)

A great post by Ted Bunn on the difference between Bayesian and frequentist approaches to probability, summarised in this marvellous plot:

Highlight: “Frequentism simply refuses to answer questions about the probability of hypotheses. … In frequentist analyses, all probabilities are of the form P(data | hypothesis), pronounced “the probability that the data would occur, given the hypothesis.” Frequentism flatly refuses to consider probabilities with the hypothesis to the left of the bar — that is, it refuses to consider the probability that any scientific hypothesis is correct.”
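To make the distinction concrete, here’s a toy example of my own (not from Bunn’s post): 8 heads in 10 coin flips. A frequentist can happily compute P(data | hypothesis) – the chance of that outcome if the coin is fair – but only a Bayesian will assign a probability to the hypothesis itself.

```python
from math import comb

def likelihood(heads, flips, p):
    """P(data | hypothesis): probability of this many heads in this
    many flips, if each flip lands heads with probability p."""
    return comb(flips, heads) * p**heads * (1 - p)**(flips - heads)

heads, flips = 8, 10
like_fair   = likelihood(heads, flips, 0.5)   # ~0.044: fine with both camps
like_biased = likelihood(heads, flips, 0.8)   # ~0.302

# Bayes' theorem: P(fair | data), assuming equal prior odds between
# "fair" and "biased to p = 0.8" (the prior is the Bayesian's extra input):
posterior_fair = like_fair / (like_fair + like_biased)
print(posterior_fair)  # ~0.13: a probability a frequentist refuses to assign
```

Note that the posterior depends on the prior odds and on which rival hypotheses are on the table – which is precisely the extra machinery frequentism declines to buy into.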