Kelvin, Rutherford, and the Age of the Earth: I, The Myth

Kelvin calculated that the Earth was probably around 24 million years old, from how fast it is cooling. Rutherford believed that Kelvin’s calculation was wrong because of the heat generated by radioactivity. Kelvin was wrong, but so was Rutherford. The Earth is indeed many times older than Kelvin had calculated, but for completely different reasons, and the heat generated by radioactive decay has nothing to do with it.

Disclosure: in my introduction to the Scientific American Classic, Determining the Age of the Earth, and elsewhere, I have, like many other authors, repeated Rutherford’s argument with approval, without paying attention to Rutherford’s own warning that qualitative is but poor quantitative, and without bothering to check whether the amount of heat generated by radioactivity is enough to do the job. He thought it was, but we now know it isn’t. It was only when chatting online (about one of the few claims in the creationist literature that is even worth discussing) that I discovered the error of my ways.

On the face of it, things could not be plainer. Kelvin had calculated the age of the Earth from how fast heat was flowing through its surface layers. An initially red-hot body would have started losing heat very quickly, but over geological time the process would have slowed, as a relatively cool outer crust formed. His latest and most confident answer, reached in 1897 after more than 50 years of study, was around 24 million years.[1]

Yet on May 20, 1904, there was Rutherford, at the lectern of the Royal Institution, talking about a piece of Cambrian rock, and announcing, on the basis of how much of its uranium had decayed to give lead and helium, that its age was some 500 million years. We even have Rutherford’s much quoted account of what happened next:

I came into the room which was half-dark and presently spotted Lord Kelvin in the audience, and realised that I was in for trouble at the last part of my speech dealing with the age of the Earth, where my views conflicted with his. To my relief, Kelvin fell fast asleep, but as I came to the important point, I saw the old bird sit up, open an eye and cock a baleful glance at me.

Then a sudden inspiration came, and I said Lord Kelvin had limited the age of the Earth, provided no new source [of heat] was discovered. That prophetic utterance referred to what we are now considering tonight, radium! Behold! The old boy beamed upon me.

This all seems clear enough. Rutherford is referring to Kelvin’s cooling argument. But this argument is invalid, because it assumes no new source of heat, and such a source exists, namely radioactivity.

[Image caption: The process that was overlooked in Kelvin’s calculations was also, indirectly, responsible for producing these folds.]

Or so says the popular myth. The truth is more complex, and more interesting. For a start, Kelvin’s “prophetic utterance” did not refer to the Earth at all, but to a separate calculation of the age of the Sun. We know how brightly the Sun shines, and hence how rapidly it emits energy. If we knew how much energy it had to start with, and assumed that it wasn’t being added to, we could simply divide the initial amount by the rate of depletion, to estimate how long it would be able to shine. Kelvin performed such a calculation many times. As the source of energy, he invoked the most intense source known to him, namely the gravitational energy released when the Sun collapsed from a diffuse cloud of gas to its present size. This led him to conclude in 1862 that the age of the Sun was in the range of 10 million to 100 million years (subsequently refined to around 20 million), and that “inhabitants of the earth can not continue to enjoy the light and heat essential to their life for many million years longer unless sources now unknown to us are prepared in the great storehouse of creation [emphasis added].” These are the prophetic words that Rutherford was referring to.
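Kelvin’s arithmetic is easy to reproduce in outline. The sketch below uses modern values for the Sun’s mass, radius, and luminosity, and takes the available gravitational energy as roughly (3/10)GM²/R (a uniform-density sphere, with about half of the released energy radiated away, as the virial theorem requires). Kelvin’s own inputs and density assumptions differed, so this is an order-of-magnitude check, not his actual calculation:

```python
# Kelvin-Helmholtz timescale: gravitational energy released by the
# Sun's collapse, divided by its present luminosity.  The 3/10
# prefactor assumes a uniform-density sphere radiating half the
# released energy; these are modern textbook figures, not Kelvin's own.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m
L_sun = 3.828e26     # solar luminosity, W

E_grav = 0.3 * G * M_sun**2 / R_sun   # roughly 1e41 J available to radiate
t_seconds = E_grav / L_sun
t_years = t_seconds / 3.156e7         # seconds per year

print(f"Kelvin-Helmholtz timescale: ~{t_years / 1e6:.0f} million years")
```

The answer comes out at roughly ten million years; different assumptions about the Sun’s internal density distribution move it around within Kelvin’s 10–100 million year bracket, but nothing plausible pushes it to billions.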

If Rutherford thought that the energy of radioactive decay was fuelling the Sun, he was greatly mistaken. The philosopher Auguste Comte had written in 1835 that we would never know the internal composition of the heavenly bodies.[2] He was wrong. Pass electricity through a gas or vapour, and it will emit light at specific frequencies that depend on the elements present (one familiar example is the sodium yellow of street lights). There are dark lines in the solar spectrum, and by 1860 the German physicist Kirchhoff had shown that their frequencies match these characteristic emission lines.[3] So the chemical composition of the Sun’s outer layers was already well known, and the fractional abundances of the heaviest elements, including almost all those that exhibit radioactivity, are quite negligible. And we now know, as Rutherford could not, that radioactive decay does not generate enough energy. Even if abundant supplies of the radioactive elements were concealed within the Sun’s interior, they would not suffice to fuel the Sun for Rutherford’s 500 million years, let alone the 4,500 million years, with as much still to come, required by current estimates.[4] It was not until 1920 that the source of the Sun’s energy was correctly identified as the fusion of hydrogen to helium, and while this was soon generally accepted, quantitative confirmation by measurements on the neutrinos produced had to wait until 2001. Using Einstein’s famous mass/energy equation and the masses of the isotopes involved, it is easy for us to calculate that the fusion of hydrogen to helium is some thirty times more productive of energy than the decay of the same mass of uranium to helium and lead; but Rutherford in 1904 could not have known of the relationship between mass and energy, or the precise masses of the relevant isotopes, or even that such things as isotopes existed.
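That “thirty times” figure can be checked directly from modern atomic masses, precisely the data the text notes Rutherford could not have had. This back-of-envelope sketch assumes the full U-238 → Pb-206 decay chain, and uses atomic (not nuclear) masses, which makes the electron bookkeeping come out right for both processes:

```python
# Energy per unit mass, via the mass defect (E = mc^2):
# hydrogen fusion versus the uranium decay chain.
# Atomic masses in unified mass units (u).
m_H1    = 1.007825    # hydrogen-1
m_He4   = 4.002602    # helium-4
m_U238  = 238.050788  # uranium-238
m_Pb206 = 205.974465  # lead-206

# Fraction of the starting mass converted to energy in 4 H -> He
f_fusion = (4 * m_H1 - m_He4) / (4 * m_H1)

# Fraction converted in U-238 -> Pb-206 + 8 He-4 (the whole chain)
f_decay = (m_U238 - m_Pb206 - 8 * m_He4) / m_U238

print(f"fusion converts {f_fusion:.2%} of mass to energy")
print(f"uranium decay converts {f_decay:.3%}")
print(f"ratio: roughly {f_fusion / f_decay:.0f} to one")
```

The ratio comes out at about thirty, matching the article’s figure: per kilogram, fusion converts about 0.7% of the mass to energy, the uranium decay chain only about 0.02%.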

But what about the age of the Earth itself, and Kelvin’s cooling calculation? This is what I had for many years assumed that Rutherford was talking about, and it turns out that radioactive decay is no real help here either. Measurements on granite in the early years of the 20th century suggested that radioactivity could fully account for the amount of heat being radiated out to space, and that the Earth might even be heating up. But we now know that granite is not representative of the Earth as a whole. The total rate of heat production by radioactive decay is currently estimated at around half the amount that the Earth emits to space, so simple-mindedly we might imagine that this extends Kelvin’s calculation by a factor of two. Maybe a bit more, since by their nature radioactive materials would have been more abundant in the remote past, but this will not make much difference over the few tens or even hundreds of millions of years then under discussion. And even this grossly exaggerates the potential significance of radioactive heating, since all we need to consider is the heat generated in the outermost layers, from which heat has had time to diffuse to the surface.
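The factor-of-two point is simple arithmetic. Taking round modern estimates of about 47 TW for the Earth’s total heat loss and about 20 TW for radiogenic production (assumed illustrative figures; published estimates vary):

```python
# Back-of-envelope: how much could radiogenic heat stretch a cooling age?
# Round modern estimates (assumptions for illustration, not measurements):
total_heat_loss = 47e12   # W, total heat flowing out of the Earth
radiogenic      = 20e12   # W, heat generated by radioactive decay

# If roughly half the outflow is replenished by decay, only the
# remainder drains the primordial heat budget, so a naive cooling
# clock runs slower by this factor:
stretch = total_heat_loss / (total_heat_loss - radiogenic)
print(f"naive extension factor: ~{stretch:.1f}")
```

Even on this naive accounting the stretch is under a factor of two, nowhere near the twenty-fold extension needed to get from Kelvin’s 24 million years to Rutherford’s 500 million.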

So how could Kelvin’s cooling argument be refuted? The correct argument had been put forward a decade earlier, before radioactivity had even been discovered, by John Perry, one of Kelvin’s own former pupils, and Kelvin had partly accepted the principle of Perry’s reasoning.

To understand what is really happening, we need to consider the different ways in which heat can be transferred. You may remember from school that there are three processes available: radiation, conduction, and convection. Radiation is the process by which the Sun, or the filament of an incandescent light bulb, glows yellow hot; or at lower temperatures the embers of a fire or the coals of a barbecue glow red hot; or, at yet lower temperatures, the Earth loses energy to the coldness of outer space by glowing in the infrared. It is not really relevant to the transmission of energy through opaque material such as rock. Conduction is simply the diffusion of heat through material, as the faster moving atoms of the hotter region jostle against, and share their energy with, their cooler neighbours. The third, and most efficient, heat transfer mechanism is convection. This is the physical movement of hotter material, carrying its heat with it, as in the roiling that takes place in the water when you boil an egg on a stove, or the pattern that forms in the film of oil in the pan if you prefer your eggs fried. Hotter material expands, making it less dense, so it rises to the surface, bringing cold material closer to the heat source.

Radiation is only relevant when we are talking about the transfer of heat through empty space, or through some transparent medium. Conduction, the diffusion of heat, is simply the statistical spreading out of the extra heat in the hotter material, and is an inefficient process over long distances. By far the most efficient heat transfer mechanism is convection, but this can only take place in a fluid, where hotter and colder material can physically change places.

Back to Kelvin’s cooling rate calculation. This depended, among other things, on assuming heat transfer by conduction, and the rate of conduction was determined by actual measurements on rocks. Now imagine what would happen to Kelvin’s calculation if the actual heat transfer process were more efficient than this. The effect is the opposite of what you would at first imagine. Common sense suggests that more rapid heat transfer would imply more rapid cooling. Not so. If heat transfer is limited, only a relatively shallow layer near the surface will have had time to contribute. If heat transfer turns out to be more efficient, the cooled layer will be correspondingly thicker, heat will have been conveyed from greater depths, and the total amount of heat conducted through the surface and lost to space will be correspondingly greater. But we know the total rate at which heat is being transferred, from the conductivity experiments and the rate at which temperature increases when we go down a mine, and this acts as a constraint on the calculation. Fixed rate, but a greater total amount because of more efficient heat transfer, implies a longer time. The cooling calculation can therefore be brought into line with Rutherford’s results, and indeed with the even longer times that we now know to be involved, if heat at depth is sufficiently more mobile than Kelvin had imagined.
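For the record, Kelvin’s conduction calculation itself is short. For a semi-infinite solid initially at uniform temperature T₀ and cooled at the surface, the surface gradient after time t is T₀/√(πκt), so t = T₀²/(πκg²). The sketch below plugs in roughly the parameters of his 1862 paper (assumed round values: an initial temperature near 3900 °C and a gradient of about 1 °F per 50 ft); his later 24-million-year figure came from revised inputs, not a different model:

```python
import math

# Kelvin's conductive half-space model: a body initially at uniform
# temperature T0 cools from the surface; the surface gradient after
# time t is g = T0 / sqrt(pi * kappa * t), hence t = T0^2 / (pi*kappa*g^2).
# Parameters are approximately Kelvin's 1862 inputs (assumed values):
T0    = 3900.0   # initial temperature, deg C (~7000 deg F)
kappa = 1.2e-6   # thermal diffusivity of rock, m^2/s
g     = 0.0365   # observed geothermal gradient, deg C/m (~1 F per 50 ft)

t_seconds = T0**2 / (math.pi * kappa * g**2)
t_years = t_seconds / 3.156e7
# Close to Kelvin's original 1862 estimate of 98 million years:
print(f"conductive cooling age: ~{t_years / 1e6:.0f} million years")
```

Perry’s point was that this bound assumes conduction all the way down; a mobile interior delivering heat to the base of a thin conductive lid lets the same surface gradient persist vastly longer than a uniform conducting solid would allow.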

In 1894, Kelvin’s former pupil and protégé, John Perry, had suggested higher heat transfer as a way of reconciling Kelvin’s age estimates with the hundred million years or so then required by the geologists. Kelvin, rather grudgingly, agreed in principle, and undertook to examine whether the thermal conductivity of rocks did increase as required at high temperature.[5] Within a few months, Kelvin reported a colleague’s measurements bearing on this question; it did not. Indeed, Kelvin took the opportunity to review the entire question in the most extreme possible light, triumphantly lowering his best estimate of the age of the Earth to around 24 million years, noting that this was in good accord with his estimates for the age of the Sun, and claiming that the burden of proof was now back with the geologists. Perry, in reply, drew attention to the fact that Kelvin had totally ignored the possibility that the Earth’s interior was or had been fluid enough to support convection, but Kelvin seems to have passed over this suggestion in silence.

A pity. Convection in the mantle, as we now call the region between the solid crust and Earth’s metallic core, is a cornerstone concept of modern geology. The implications of this, together with an explanation of why Perry waited until 1894 to challenge Kelvin’s calculations (which went back, as we have seen, to 1862 and earlier), and how I belatedly stumbled upon this story as a result of chatting online about the creationist literature, will be the subject of further posts.

[4] Some radioactive elements, such as the newly discovered radium that Rutherford was referring to, do generate heat quickly, but only because of their rapid decay; the short half-lives that this implies rule them out as candidates for heating over geological timescales.

[5] Perry, Nature 51, 224–227 (1895); Kelvin’s acknowledgement is at p. 227, his dismissive rebuttal at p. 438, and Perry’s final attempt at persuasion at p. 582.


About Paul Braterman

Science writer, former chemistry professor; committee member, British Centre for Science Education; board member and science adviser, Scottish Secular Society; former member, editorial board, Origins of Life, and associate, NASA Astrobiology Institute; first popsci book, From Stars to Stalagmites (2012)

An interesting reverse parallel is the rate of heat absorption by a frozen steak left on a kitchen table. The thermal conductivity of ice increases with decreasing temperature, and at zero Centigrade water has only about one quarter the thermal conductivity of ice. For this reason a frozen steak left at ambient temperature absorbs heat significantly faster than a merely chilled one, over and above the faster warming that Newton’s law of cooling already predicts from the larger temperature difference.