Tuesday, March 7, 2017

Richard Carrier: Are the Odds Against the Origin of Life Too Great to Accept?


The following material examines arguments
officially and formally refuted in Richard Carrier, "The Argument from
Biogenesis: Probabilities against a Natural Origin of Life," Biology & Philosophy 19.5 (November 2004), pp. 739-64. Note the clarification added to Addendum C
regarding the definition of self-replication. In addition, new and
important breakthroughs are continuing. For example, see: Graciela
Flores, "Volcano Gas, Amino Acids Make Peptides," The Scientist, 8 October 2004.

All too frequently we hear statistics being offered to "prove" that
the odds against the origin of life are so great that we must posit a
Creator to explain the event. David Foster, for example, whose
book I critique through the link at the bottom of this essay, uses the
odds of spontaneously assembling the genome of the T4 bacteriophage, and
also the human hemoglobin molecule, as proof of the impossibility of
life, even though no one thinks the T4 genome or hemoglobin has ever
been assembled at random (for more on these statistics, see Part 9
of that review). I have encountered many such references, and since
they are always obscure, and often antiquated, it is rarely possible to
know how they were derived and thus whether they have any merit. It is
helpful to have a summary analysis of all known examples, to be used to
check these claims whenever they are brought up in conversations,
debates, books, or articles. This essay is an attempt to fill that need
(another good place for information is Ian Musgrave's excellent page on this topic--on which, see Note).
Although I cover a wide range of sources, I am certain that I have not
found all of them. If you ever encounter a statistic being cited from a
source which is not discussed here, please let me know
and I will investigate and expand this essay accordingly. [In response
to creationist criticisms of what I am doing in this essay, I have
composed a more theoretical discussion of ten typical errors in creationist approaches to and uses of cosmology, biology, statistics, and logical argument.]

Frank Salisbury

One of the few serious, scientific attempts at such calculations is
to be found in the oft-cited and vastly out-of-date article "Natural
Selection and the Complexity of the Gene" by Frank Salisbury in Nature
(vol. 224, Oct. 25, 1969, pp. 342-3). The purpose of this article was
to identify a scientific problem and suggest possible avenues of
research toward a solution (something never seen in creationist
literature--creationists, unlike scientists, are never interested in
actually solving problems). The Salisbury article is a good one, because
it cites all previous scientific literature on this issue up to his own
time. I have not yet found an equivalent work summing the literature up
to any more recent date, but that does not mean none are out there. I
am eager to hear of any such references. Salisbury's basic assumptions,
and the problems with those assumptions, are clearly stated. First, he
calculates that the odds against life beginning in the known expanse and
age of the universe are 1 in 10^415, but, as he says himself, this is
only true "if only one DNA molecule [1000 nucleotides large] were
suitable" to get biology going. In other words, if many possible
molecules could be substituted, these odds change for the better, as
they also do if a smaller molecule could have gotten things started.
Salisbury himself notes that the odds are rather good that at least one
141-nucleotide (or smaller) replicator could have formed, given the age
and expanse of the universe as then understood. The discovery of the tetrahymena (see Addendum C) thus renders Salisbury's concerns moot: very small, simple replicators are now known to be possible.
Another article of Salisbury's has been cited, but this citation is
abused. In "Doubts About the Modern Synthetic Theory of Evolution" in American Biology Teacher
Sept. 1971, p. 336, Salisbury calculates the number of possible
arrangements of nucleotides in a "medium protein" 300 amino acids long,
arriving at 10^600. Salisbury did not argue that this proves the origin
of life too improbable to have happened by chance (he does not even use
this number to derive a statistic). For in fact, his calculation makes a
variety of assumptions which negate the use of this number for that
purpose: first, for the first life we want to examine the minimum
self-replicating protein, not the "medium" one; second, this only gives
us the number of different arrangements, and billions upon billions of
those arrangements could be viable self-replicators, not just one of
them; third, he presumes a four-nucleotide DNA code, even when there is
no reason why life had to be coded that way (there are other coding
systems known in nature, and scientists are inventing new life forms
based on others, cf. op. cit. n. 1a),
and alien life may exist which is coded with a different four
nucleotides, or more or less than four, and so on, so that the odds of
life forming cannot be derived from the expectation that ours is the
only possible molecular arrangement; and fourth, this is just the number
of arrangements of coding nucleotides in one gene, but for all we know
life began much simpler than this, and later developed a coding system
through symbiosis and natural selection. That last point is particularly
important, since all that is needed to get life going is anything that
replicates, and four-bit coded DNA is not the only feasible molecule
that might do that--a much simpler RNA code could have been the starting
point [1b].

Henry Quastler

Both Coppedge and Salisbury cite the work of Henry Quastler on this problem, which can be found in The Emergence of Biological Organization
(1964). But he is quoted selectively: his final conclusions
actually support the possibility of life, even though along the way he derives
some daunting figures that are quoted out of context. His approach is
unique: he calculates the information content of a DNA code. He first
argues that "a low estimate of the information content of a bacterium"
is "10^3 bits...[which] corresponds to...a single choice among 2^1000
possibilities." This means that "the probability of making such a choice
by accident is 10^-301" (p. 4). After estimating the available time,
materials, etc., "the probability of life having originated through
random choice...[is] about 10^-255" (p. 6). Of course, Quastler knows
very well that life did not begin with a bacterium, and so he does not
say that these are the odds against the origin of life; they simply
demonstrate that life must have begun simpler. That is, natural
selection can build up the information content of this complex bacterium
beginning with something smaller.
When he considers numerous other factors for a possible original
replicator, even the worst chance of life beginning naturally he finally
figures to be 10^-20, which is well within the realm of the possible
[see 1].
Quastler concludes that this "suggests that the probability of
obtaining a complete set of enzymes by coding proteins from 10^7
nucleotide pairs may be quite high" (p. 46). In other words, it is
naturally possible. After adding other factors and considering all
angles, he figures a final range of probability between 10^-6 and 10^-30
(p. 58). Quastler's work thus proves that the natural origin of life is
not too improbable at all.
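Quastler's conversion between bits and probabilities can be checked directly. The translation of his final probability range back into bits is my own, not his:

```python
from math import log10

# 1000 bits of information = one choice among 2^1000 possibilities,
# so the chance of hitting it blindly is 1 in 10^(1000*log10(2)):
bits = 1000
log10_possibilities = bits * log10(2)     # ~301
assert round(log10_possibilities) == 301  # matches Quastler's 10^-301

# Conversely (my conversion, not Quastler's): his final range of
# 10^-6 to 10^-30 corresponds to a replicator specifying only about
# 20 to 100 bits of information:
assert round(30 / log10(2)) == 100
assert round(6 / log10(2)) == 20
```

In other words, his argument is that natural selection only has to be handed a 20-to-100-bit starting point, not a 1000-bit bacterium.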

Hubert Yockey

Another scholarly source is Hubert Yockey's article "Self Organization Origin of Life Scenarios and Information Theory" in Journal of Theoretical Biology
91 (1981) pp. 13-31 (this is an extension of work done by him in 1977
in vol. 67 of the same journal). The objective of his paper is not to
prove special creation (he actually rejects such theories as useless),
but to argue that alien life is so improbable that we ought to shift
science to draw talent and funding away from projects like SETI and into
"research on the origin of life." In his own abstract, he presents his
conclusion as "belief in little green men in outer space is purely
religious not scientific." His assumptions, however, are as faulty as those
made by creationists, although his approach is much more sophisticated.
Above all, he does not generate any actual estimate of probability.
Yockey tries to argue that only 10^5 arrangements of a protein 100
amino acids long, out of a total possible 1.26 x 10^130 arrangements,
are of concern to biology, if we assume a 4-bit code. Though he does not
state this explicitly, this means the odds against life starting, if it
had to start with just such a protein, would be 1 in 10^125. Though
this is not his argument, creationists have tried to spin it that way.
But this is invalid for two reasons: Yockey assumes exactly and only 20
kinds of amino acids are relevant, but life might be possible with any
combination of any number of the hundreds of kinds that can exist in
nature. The mere fact that life on our planet got settled on a certain
twenty does not entail that this is the only way it can be done [1a].
Yockey also assumes that exactly and only 100-amino-acid chains are
relevant, but life could have been begun by any number of possible
chains of many different lengths, and Yockey does not sum all the
relevant combinations of all the possible naturally-occurring chain
lengths which may be self-replicating--he only solves this for the
100-amino-acid chain. The mathematical error this produces is discussed
in the Biology & Philosophy article cited at top.
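Yockey's own numbers check out arithmetically; the problem lies in the assumptions behind them. A quick verification (the 1-in-10^125 figure is the implication I drew above, not a number Yockey states):

```python
from math import log10

# 20 amino acids over 100 positions: total arrangements = 20^100
log10_total = 100 * log10(20)
assert abs(log10_total - 130.1) < 0.01               # ~10^130.1
assert abs(10 ** (log10_total % 1) - 1.268) < 0.01   # i.e. ~1.26 x 10^130

# If only ~10^5 of those arrangements mattered, the implied per-trial
# odds would be 10^5 / 1.26x10^130, i.e. about 1 in 10^125:
log10_odds = log10_total - 5
assert round(log10_odds) == 125
```

Both objections above attack the numerator and the denominator of that fraction: more than 20 amino acids may be usable, and chains of many lengths, not just 100, could have been the first replicator.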
Yockey also generates another misquoted number. Assuming a
particular maximum number of suitable planets and amino-acids, the known
age of the universe, and a recombination rate of twice per day (on
average), he tells us that 1.61 x 10^60 different 100-amino-acid chains
will be produced. This in no way refers to the odds against life, since
Yockey does not try to calculate or predict how many of those
combinations would be viable self-replicators (certainly it would not be
only one), and all the same problems apply here as before.
Nevertheless, this number is cited as if it were a statistic by Bradley
and Thaxton in The Creation Hypothesis (discussed below)--indeed,
they even get it wrong, claiming the number to be 1 x 10^65 (they also
get the citation wrong, listing the date of Yockey's 1977 paper as 1981,
and printing his actual 1981 article not as vol. 91, but as 191).
Of course, even Yockey's other assumptions are questionable. He
argues for a 4-bit code. Yet he himself admits that replicating proteins
are known that function on a 3-bit code (p. 19), and he admits that,
after all is said and done, a replicating protein chain as large as
100,000 amino-acids long could be hit upon in the known age and expanse
of the universe, if we assume a 2-bit proto-gene (p. 22). He argues
against such a replicating system, however, but unconvincingly. His
argument is that such a small code would require longer chains to
accomplish the same results, but that is moot. All we need to get life
going is anything that replicates, no matter how inefficiently or
inaccurately, or how simply, since all the failures will be washed away,
no matter how many more there are, while the successes will remain and
continue to reproduce. Then natural selection can get to work. And it is
easy to imagine how a 2-bit replicator could merge with another through
a symbiotic relationship, giving rise to a 4-bit code like our present
DNA system. Yockey does not even consider this scenario.
Yockey later wrote a book, in which he repeated the same faulty arguments, entitled Information Theory and Molecular Biology
(1992). Besides the curious fact that he calls the Big Bang a "hydrogen
bomb explosion" which, unless he is being metaphorical, throws his
knowledge of science into doubt, he makes bold claims such as "the
belief that...any...protein could appear by chance is based on faith"
(257), yet this does not seem to be true (for the tetrahymena discovery
refutes such a claim, as do recent discoveries of replicating peptide
chains), and even if true, the contrary statement, "the belief that any
protein could not appear by chance is based on faith," would
still be just as true. He also claims that "perhaps 300 to 400 amino
acids" are required for the simplest replicator, although he admits that
it may be as few as 56, something few creationists are willing to
mention.
When it comes time to calculate an improbability (254-257), all
Yockey does is calculate the improbability of a single protein forming
by chance (cytochrome c), and his result is 2 x 10^-44, which is low but
not low enough to ensure impossibility, since any event with odds better than 1 in
10^50 could have happened at least once in all of time and space, as
we've noted already (Borel).
But this calculation is moot, since we need to know the chance of any
viable replicating protein arising, not just one specific protein. There
is no reason to suppose that every possible biosphere needs cytochrome
c. Other biospheres will have protein catalogues completely alien from
ours, and just as rare. Hence everything cytochrome c does in our
biosphere will be accomplished by a completely different protein in
other biospheres, so calculating the improbability of cytochrome c is a
useless exercise. His approach is like proving that he is most unlikely
to win the lottery and therefore the lottery can never be won, when in
fact someone wins the lottery on a regular basis. What we want to know are the odds of some protein (or set of proteins) winning the lottery, not the odds of a specific protein doing so. Thus his number is moot. Even so, Yockey then moves
this number down to 2.3 x 10^-75 on the grounds that terrestrial
chirality (all-left-handed proteins) must happen by chance, although he
acknowledges that it may have arisen deterministically, as is very
likely, so this final number is even more irrelevant. For sources on
natural causes of chirality, see [2], and a good deal more is said about this [below].

Update, November 2006: I have addressed Yockey's new book, Information Theory, Evolution, and the Origin of Life (2005), on my November Blog.
This book contains nothing significantly new: Yockey still generates no
actual statistic for the improbability of natural biogenesis (though he
generates two numbers that creationists might abuse as such), and
commits the same fallacies noted above for his previous work, all in
pursuit of the exact same agenda (the destruction of SETI).

Carl Sagan

Even Carl Sagan has been cited, from a book he edited, Communication with Extra-Terrestrial Intelligence
(MIT Press, 1973), a record of the proceedings of a conference on SETI.
Sagan himself presented a paper at that conference, in which he reports
(pp. 45-6) the odds against a specific human genome being
assembled by chance as 1 in 10^2,000,000,000 (in other words, the genome
of a specific person, and not just any human). As a build-up to this
irrelevant statistic he states that a simple protein "might consist" of
100 amino acids (for each of which there are 20 "biological varieties")
for a chance of random assembly, for one specific protein of this sort,
of 1 in 10^130. He uses these statistics as a rhetorical foil for the
fact that no human genome is assembled at random, nor did life have to
start with only one possible protein of a particular, specific type, but
that "the preferential replication, the preferential reproduction of
organisms, through the natural selection of small mutations, acts as a
kind of probability sieve, a probability selector," so that one must
account for natural selection in estimating the odds of any alien
species existing elsewhere in the universe, and not just calculate the
odds of random assembly like the examples he just gave. Nevertheless,
Sagan's words are used against him by Christians who grab at the numbers
without paying attention to their context, or indeed to the fact that
Sagan uses extremely simplified equations and assumptions.
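Sagan's "probability sieve" is easy to demonstrate. The toy program below, in the style of Dawkins's well-known "weasel" illustration (the target string and parameters are mine, chosen purely for illustration), contrasts cumulative selection with one-shot random assembly:

```python
import random

random.seed(0)
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS A WEASEL"

def mutate(s, rate=0.05):
    """Copy s, giving each character a small chance of random change."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def score(s):
    """Number of positions matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

best = "".join(random.choice(ALPHABET) for _ in TARGET)  # random start
generations = 0
while best != TARGET:
    # the "sieve": keep the parent plus the best of 100 mutant copies
    best = max([best] + [mutate(best) for _ in range(100)], key=score)
    generations += 1

# One-shot random assembly would expect ~27^23 (about 10^33) draws to
# hit this 23-character target; cumulative selection, which retains
# partial successes, finds it in a few hundred generations of 100 copies.
```

The selection step is deliberately crude; the point is only that retaining partial successes changes the search from exponential to roughly linear in the length of the target, which is exactly why "random assembly" statistics say nothing about evolved genomes.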

Julian Huxley

Recently, a creationist cited yet another dismally misquoted
scientific statistic which really takes the cake for best example of an
abused reference. The source is Julian Huxley, who, it is
said--and he of all people a "bastion of the theory of
evolution"--determined that "the odds of the evolution of the horse were
1 in 1000 to the power of 1,000,000." One might immediately wonder how
someone who believed this could still be a defender of evolution--after
all, if those really were the odds against the evolution of the
horse, who would buy evolution as a sensible explanation? Our
doubt-sensors are right to ring loudly on this one: for Huxley never
made such a claim. In fact, he made the exact opposite claim. Here is the original quote:

A little calculation demonstrates how incredibly
improbable the results of natural selection can be when enough time is
available. Following Professor Muller, we can ask what would have been
the odds against a higher animal, such as a horse, being produced by
chance alone: that is to say by the accidental accumulation of the
necessary favorable mutations, without the intervention of selection. (Evolution in Action, 1953, p. 45)

The calculated result is 1 in 10,000^1,000,000 (p. 46). I will not
bother with analyzing his method--there are fundamental flaws in his
approach, but they do not matter, because he is only trying to get a
ballpark picture, which is by his own admission ultimately irrelevant.
For naturally, he says "this could not really happen." But this number
does not have anything to do with natural selection--as he says, this
calculation is for the odds of producing a horse without natural selection. Thus, creationists are shamefully abusing this quote when they use it to claim that it refers to the odds with
natural selection--that is claiming the exact opposite of what Huxley
wrote. Instead, Huxley continues after this calculation to show how
"thanks to the workings of natural selection and the properties of
living substance which make natural selection inevitable" (p. 46) "rare
and abnormal events" become "common and normal" (p. 47) and "all
objections to a selectionist explanation of evolution that are based on
the improbability of its results fall to the ground" (p. 48).

Charles-Eugene Guye

One of the funniest examples of these kinds of statistics comes from Evolution: Possible or Impossible by James F. Coppedge
(this book is discussed in more detail below). It even shows the
typical path and lunacy of these things. Coppedge, on p. 234 of his
book, cites an article by Ulric Jelinek in Campus Challenge
(Campus Crusade for Christ, Arrowhead Springs, CA, Oct. 1961), which
claims that the odds are 1 in 10^243 against "two thousand atoms" (the
size of one particular protein molecule) ending up in precisely that
particular order "by accident." Where did Jelinek get that figure? From Pierre Lecomte du Noüy's book Human Destiny (1947, pp. 33-4), who in turn got it from Charles-Eugene Guye,
a physicist who died in 1942. Guye had merely calculated the odds of
these atoms lining up by accident if "a volume" of atoms the size of the
Earth were "shaken at the speed of light." In other words, ignoring all
the laws of chemistry, which create preferences for the formation and
behavior of molecules, and ignoring that there are millions if not
billions of different possible proteins--and of course the result has no
bearing on the origin of life, which may have begun from an even
simpler protein. This calculation is thus useless for all these reasons,
and is typical in that it comes to Coppedge third-hand (and thus to us
fourth-hand), and is hugely outdated (it was calculated before 1942,
even before the discovery of DNA), and thus fails to account for over
half a century of scientific progress.

Harold Morowitz

Scientific ignorance also leads to the abuse of such citations, and
you have to pay careful attention to context. Coppedge, for instance,
also cites (on p. 235) Harold J. Morowitz, Energy Flow in Biology
(p. 99), who reports that (paraphrased by Coppedge) "under
'equilibrium' conditions (the stable state reached after initial
reactions have balanced), the probability of such a fluctuation during
Earth's history would be...1 chance in 10^339,999,866." In particular,
this is "the probability of chance fluctuations that would result in
sufficient energy for bond formation" needed to make a living cell. This
statistic is laughable not only for its outrageous size, but for the
mere absurdity of anyone who would bother to calculate it--but what is
notable is that it has nothing to do with the origin of life. For notice
the qualification: these are not the odds of the first life forming,
but the odds of enough energy being available for any life to
grow at all, in an environment which has reached an effective state of
thermal equilibrium--a condition which has never existed on Earth. It is
obvious that in an equilibrium state, with no solar or geothermal
input, it would be impossible for life to gather enough energy to go on.
Who needs to calculate the odds against it? Morowitz was demonstrating a
fact about the effects of maximized entropy on a chemical system, not
the unlikelihood of life originating in a relatively low entropy
environment like the early or even current Earth. The fact is that life
began in, and has always enjoyed, an active chemical system that is not
only far from equilibrium, but receiving steady energy input from the
sun and earth. So this statistic has no bearing on the question of the
odds of life.

Fred Hoyle and N.C. Wickramasinghe

The most commonly cited source for statistical impossibility of the origin of life comes from another odd book, Evolution From Space, written by Fred Hoyle and N.C. Wickramasinghe (Dent, 1981; immediately reprinted by Simon & Schuster that same year, under the title Evolution From Space: A Theory of Cosmic Creationism). The statistic 10^40,000 is calculated on p. 24 (Hoyle repeats the exact same argument on pp. 16-17 of The Intelligent Universe
(1983)). A twenty-amino-acid polypeptide must chain in precisely the
right order for it to fit the corresponding enzyme. Although Hoyle does
not state it, this would entail that there must have been a minimum
specificity, of one specific possibility, for the first enzymic life, of
10^20, a value of which Hoyle himself says "by itself, this small
probability could be faced" (and this statistic even fails to account
for the fact that any number of "first enzymic organisms" are possible,
and not just one as his calculation assumes). Hoyle then goes on: "the
trouble is that there are about two thousand enzymes," (in "the whole of
biology," p. 23), "and the chance of obtaining them all in a random
trial is only one part in (10^20)^2000 = 10^40,000..."
There are three flaws in this conclusion: he assumes (1) that
natural selection is equivalent to random shuffling, (2) that all two
thousand enzymes, all the enzymes used in the whole of biology, had to
be hit upon at once in one giant pull of the cosmic slot machine, and
(3) that life began requiring complex enzymes working in concert. As for
(1), I address this mistaken idea throughout my critique of Foster. To put it in a nutshell, natural selection is not random, but selective,
a distinction that is not trivial (a point made by Sagan above). As for
(2), Hoyle leads his readers to believe that every living organism
requires or uses all two thousand enzymes, but he leaves himself an out,
for when he claims this, he uses the words "for the most part" (p. 23).
In other words, some life, probably the simplest, uses less. Since
biologists consider all present life to be far more advanced than
early life, even if all presently living organisms required two
thousand enzymes it would not follow that the first life did. It almost
certainly did not. As for this point and (3), see Addenda C.
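The difference between objections (1) and (2) can be put numerically: demanding everything in one simultaneous trial multiplies exponents, while retaining each success before seeking the next (as selection does) merely adds expected waiting times. A simplified sketch of that contrast, not Hoyle's model:

```python
from math import log10

# Hoyle's multiplication: all 2000 enzymes in one simultaneous trial
# means the per-enzyme exponents multiply...
per_enzyme_log10 = 20
assert per_enzyme_log10 * 2000 == 40_000   # (10^20)^2000 = 10^40,000

# ...whereas if each success is retained and the next is sought
# separately, the expected trial counts merely add:
sequential_trials_log10 = log10(2000) + per_enzyme_log10
assert round(sequential_trials_log10, 1) == 23.3   # ~2000 x 10^20
```

Ten to the 23.3 versus ten to the 40,000: the entire force of Hoyle's number comes from the all-at-once assumption, which no evolutionary account makes.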
For a good introduction, with numerous recommended readings, on the
current state of the science of biochemical origins, see Massimo
Pigliucci's "Where Do We Come From?" in the Skeptical Inquirer (September/October 1999).
Hoyle and Wickramasinghe also wrote another book together using
pretty much the same arguments, although with less vigor, entitled Our Place in the Cosmos: The Unfinished Revolution (again by Dent, 1993). Fred Hoyle also published an independent work, Evolution From Space (The Omni Lecture) and Other Papers on the Origin of Life
(Enslow; Hillside, NJ; 1982). This lecture is a much better work than
Foster's, but it suffers from some similar faults. For example, Hoyle's
opening assumptions imply that life must have begun as a chain of 100
proteins. There is no way he could know this. But it is also not even
close to what scientists actually think [see Addenda C].
And like Foster, Hoyle thinks that a preponderance of deleterious vs.
beneficial mutations must doom evolution to failure. This is quite
false, and I address this mistaken notion in Chapter 8 of my review of Foster, and my mathematical calculations prove the point further in Chapter 9.
Jeff Lowder pointed out to me another classic example of how stats
like these float through several layers of sources: David Noebel in Understanding the Times (p. 328) quotes Luther D. Sunderland (Darwin's Enigma, 1984, p. 60), who in turn references a November 1981 New Scientist article by Hoyle ("The Big Bang in Astronomy," pp. 521-7). Noebel quotes Sunderland: "[Hoyle] wrote in the 19 November 1981 New Scientist
that there are 2,000 complex enzymes required for a living organism but
not a single one of these could have formed on Earth by random,
shuffling processes in even 20 billion years." Note that Sunderland
actually gets the argument wrong: Hoyle specifically says in his book that we can
get one by random processes, but it is all 2000 together that is
supposedly impossible. I checked the original article, and Hoyle does
not mention the numbers 2000 or 20 billion there, so Sunderland clearly
had Hoyle's book on hand, yet he ignores Hoyle's concession there and
instead combines these claims with a different claim made in the
article.
In the article, Hoyle says this: "the combinatorial arrangement of
not even one among the many thousands of biopolymers on which life
depends could have been arrived at by natural processes here on the
Earth" (p. 526). He never explains what he means by a biopolymer, or how
he arrives at this conclusion. Instead of presenting facts and
mathematical analysis, he simply declares that the odds against this are
like 10^50 blind men all solving a Rubik's Cube (with odds against each
success being 4x10^19) at exactly the same time. Thus, the odds against
"arriving by random shuffling of just one of the many biopolymers on
which life depends" is, if we complete the math that he didn't do,
4x10^950 (p. 527). This figure finds no support in any of his books, and
as far as I can see has no basis whatsoever--indeed, it directly
contradicts what he says in his book published in the same year, and in
all of his books published since (see above).

John D. Barrow and Frank J. Tipler

There has been another attempt to develop a statistical
demonstration of the improbability of man evolving, and it suffers from
the same central flaw that Foster's work suffers from. In The Anthropic Cosmological Principle (Oxford, 1986), John D. Barrow and Frank J. Tipler
exhaust over 600 pages trying to prove their point, yet a single
sentence is sufficient to destroy their whole project: "The odds against
assembling the human genome spontaneously," argue the authors, "is even
more enormous: the probability of assembling it is between
(4^180)^110,000...and (4^360)^110,000....These numbers give some feel
for the unlikelihood of the species Homo sapiens" (p. 565). They fail to realize that this is a non sequitur,
as already noted by Sagan, for it only establishes such an unlikelihood
if we assume, borrowing from their own words "spontaneous assembly."
But no one has ever claimed this of the human genome, and the facts
establishing evolution demonstrate that this absolutely did not happen.
Thus, like Foster and Hoyle, Barrow and Tipler completely ignore the
fact of evolution and the role of natural selection in their
calculation, and consequently their statistic (which has already been
cited by Craig in a debate with Draper) has absolutely no relevance to
the real question of whether man evolving is improbable.
They produce one other statistic of this sort, stating that "if we
take the average gene to have 1800 nucleotide bases...then 180 to
360...are immutable for each gene" so that "the odds for assembling a
single gene are between 4.3 x 10^-109 and 1.8 x 10^-217" (p. 565).
However, the first life would begin as the smallest replicator, not an
"average" one, so that this statistic tells us nothing about the odds
against life forming. This statistic also assumes that only one gene of
such a length would be viable--but we know that certainly there are
billions of different viable genes of such a length (for all we know, all
genes are viable in the right circumstances), so this statistic does
not tell us the odds against random assembly of a gene (it only tells us
the odds against a specific gene, ignoring that many may work), and thus is useless even on that count.
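Their per-gene arithmetic does at least follow from their own assumptions, as is easy to reproduce (the reconstruction below is mine); the problem is the assumption that only one sequence of such a gene could work:

```python
from math import log10

def prob_parts(n_immutable_bases):
    """Return (mantissa, decimal exponent) for 4^-n, i.e. the chance of
    hitting n specific bases (4 choices each) exactly by chance."""
    log10_p = -n_immutable_bases * log10(4)
    exp10 = int(log10_p // 1)
    mantissa = round(10 ** (log10_p - exp10), 1)
    return mantissa, exp10

# 180 immutable bases: 4^-180 ~= 4.3 x 10^-109
assert prob_parts(180) == (4.3, -109)
# 360 immutable bases: 4^-360 ~= 1.8 x 10^-217
assert prob_parts(360) == (1.8, -217)
```

So the published figures are arithmetically sound; they simply answer the wrong question, the odds of one predesignated gene rather than of any viable gene.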

James F. Coppedge

One of the most valiant attempts at this sort of statistic is to be found in the creationist book Evolution: Possible or Impossible by James F. Coppedge
(Zondervan, 1973). Unlike most other tries, Coppedge at least attempts
to get at the root of the problem by examining the odds of the first
theoretically possible organism arising by chance. But he fails largely
because of certain bogus assumptions, which are only partly to be blamed
on the fact that his work is twenty-five years out of date. As an
example of being out of date, he declares that "there is no method
known" whereby reproduction can occur without the "intricate
DNA-RNA-enzymes-ribosome process" (p. 67) but this has since been
refuted (see Addenda C). As an example of
simply using faulty logic, Coppedge says that "the average number of
amino acids in proteins of the smallest known living thing is 400, at
the very least" (p. 57). This commits two mistakes: first, when
discussing the first possible life, we should only be concerned with the
minimum, the smallest possible protein that can exist in a replicating
system, not the average; second, the "smallest known living thing" is
already billions of years more advanced than the first life, which is
almost certainly extinct. Coppedge claims that "there is no real reason
at present to believe that any living thing has ever existed that is
simpler than the...smallest living entity known" (p. 112). But this is
the exact opposite of the truth. There are many reasons to think
otherwise. It has been estimated that over 99% of all species
that ever lived have gone extinct, and the simplest of organisms would
surely have been devoured or starved out by their more advanced
descendants long ago. Scientists have catalogued numerous other good
reasons to think that DNA is a late development, and that all surviving
phylogeny is descended from a common ancestor, which we clearly have not
yet found among the living. Thus, we cannot use present life as a basis
for calculating the odds of the random formation of the first
life, and this is even more so when it comes to bacteria, which we
already know are highly evolved, cf. "Evolution of Bacterial Genomes" by
Trevors, Antonie van Leeuwenhoek International Journal of General and Molecular Microbiology 71:3, pp. 265-270 (March, 1997).
Coppedge makes some specific calculations, and both kinds of
problems plague his results. For example, he calculates the odds against
the origin of life as 1 in 10^8318, which is, he says, "out of all the
protein molecules that ever existed on Earth, the odds against there
being even one set with only left-handed components sufficient for the
smallest theoretical living entity" (p. 76). But his "smallest
theoretical living entity" is something derived from the 1969 work of a
certain Dr. Harold J. Morowitz ("Biological Self-Replicating Systems," Progress in Theoretical Biology,
F. Snell, ed., pp. 35 ff.), which is what Morowitz believed to be the
smallest DNA-based genome that can sustain itself, consisting of 239
proteins, with an average of 400 amino acids per protein. His
conclusions were largely arbitrary and have since been refuted
experimentally (see Addendum C).
Likewise, the
problem of uniform one-handedness has many possible explanations. It is
not necessarily a matter of pure chance. Since uniform handedness is
important to biological function, it will be selected for, and thus even
if we are relying entirely on chance for the first organism (which will
be much smaller than Coppedge thinks, and so the odds will be vastly
better than he estimates), we no longer have to explain uniform
handedness after that. Furthermore, it is possible that the first living
polymers grew on the surface of clays or crystals, which would create a
natural tendency for all links in the chain to have the same
handedness--and these circumstances have other ordering effects: as
Yockey states on p. 18 of his article cited above,
studies show that sequences of amino acids formed on clays or crystals
are demonstrably non-random. But most importantly, as even Coppedge
knew, "an all-one-handed chain is...more stable" (p. 249) and
consequently short one-handed chains will last longer than mixed chains,
and thus will more likely link with more chains and grow, remaining
even more stable as the chain gets larger. In other words, only
one-handed chains are likely to naturally grow very long, and thus it is
no longer a question of random chance, but natural tendency and, again,
natural selection (for even more on this whole issue, see [2]).
Thus, this entire statistic is no longer relevant. It is further flawed
by the fact that in this example he assumes that only one arrangement
of proteins and amino acids will work (Morowitz did not claim this), and
that the twenty amino acid types that are eventually standardized in
Earth biology are the only ones that can produce life, and that all
twenty are required, but there is no reason to assume any of this [cf. n. 1a].
His calculations even assume that an increase in the number of possible
amino acid types will decrease the odds of forming a reliable
replicating genome, when it should be obvious that the opposite must be
the case: if more materials are available, the more chances there will
be of hitting on something that works. Thus, even his math, like
Foster's, is not up to the actual task here.
On page 102 Coppedge calculates the odds against proinsulin forming
by chance as 1 in 10^106. But this is the same old mistake of assuming
that proinsulin formed spontaneously. There is no reason to think that
it did. Like the others, he ignores the role of natural selection. So
this statistic is not relevant to the origin of life or the evolution of
any animal. He then goes on to calculate the odds against "getting
[even a single] usable protein" as 1 in 10^240 in one try (p. 104), or 1
in 10^161 "in all the history of the Earth" (p. 109). This statistic is
based on several bogus assumptions. He bases his math again on the
premise that such a protein must have an average length of 400 amino
acids, rather than the minimum possible length (which no one knows),
even though for the first life it is the minimum, not the present
average, which must be considered. He also assumes that the rate of
meaningful word formation in random English letter generation is the
same as the rate of meaningful replicating protein code formation,
though there is absolutely no rationale for this. Why would protein
replication have anything to do with the English language? Indeed, there
are only around 100,000 meaningful words in English, despite its 26
letters, yet with only four letters (the DNA bases), or twenty amino
acids, there are billions of meaningful "words" in protein replication. What Coppedge
also fails to appreciate is that anything that reproduces itself
is "useful" for the purpose of natural selection--even if it has no
observable effect on an organism. Thus, his statistics, again, are
useless.
Based on these bogus results, Coppedge eventually concludes that
"the odds against one minimum set of proteins happening in the entire
history of the Earth are 10^119701 to 1" (p. 111). This is based on all
the previous flawed assumptions: the "minimum" 239-protein genome of
Morowitz, the 400 amino acids per protein, the assumption that only one
combination will work, that only (and all) twenty amino acid varieties
are required, and that only 1 in 10^240 randomly-made proteins are
"usable," yet all these assumptions are invalid or have since been
refuted (as discussed several times above), and thus his final result is
to be tossed in the garbage. Coppedge later tries giving the best odds
(p. 113), by assuming instead that only ten kinds of amino acids are
needed, with only 12 amino acids as a minimum protein size, and ten
proteins as a minimum replicator size, and various other assumptions
about chemical quantities and combination rates, to get a chance against
this creature forming by chance of 1 in 10^35 in the history of the
Earth. Of course, this is actually within the realm of cosmic
possibility [see 1],
and so does not disprove the natural origin of life. But it is still
not a valid result, since it assumes that only one arrangement would
work (or very nearly that--he allows one meager amino acid substitution
per protein), and that it can be done with only ten of the thousands of
amino acid types. Indeed, even he knows that at least twenty
work, and if an organism can be made with any ten out of twenty possible
types, this changes the odds greatly, yet he does not account for even
this. So this statistic is useless, too.
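The cost of that last omission is easy to estimate. A minimal sketch in Python (assuming, purely for illustration, that each possible ten-type set is equally workable, which Coppedge's framework neither states nor denies):

```python
import math

# Coppedge's best-case figure assumes one fixed set of ten amino-acid
# types. If any ten of the twenty standard types could serve instead,
# the number of acceptable type-sets multiplies by C(20, 10).
sets_of_ten = math.comb(20, 10)
print(sets_of_ten)  # 184756

# His 1-in-10^35 odds would improve by roughly that factor:
improved_exponent = 35 - math.log10(sets_of_ten)
print(round(improved_exponent, 1))  # about 29.7, i.e. roughly 1 in 10^30
```

Even this crude adjustment moves the result five orders of magnitude further into the realm of cosmic possibility.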
Coppedge makes one final calculation: the odds against randomly
forming a single gene in one shot are, he figures, 1 in 10^236, based on
the assumption that a gene requires a chain of at least 1200 amino
acids (compare this with Salisbury's assumption of 1000 per gene). But
to calculate the number of possible combinations which would produce a
viable "gene" he once again uses the rate of meaning for random English
letter-combinations, a thoroughly invalid analogy, and he dismisses the
possibility that genes were not the basis of the first life but rather
the result of several independent organisms chaining together (just as
happened in the change from single-celled to multi-celled organisms, and
as possibly happened in the adoption of a cellular nucleus, and the
mitochondrion, etc.). Since these assumptions invalidate his results,
and since current science suggests much simpler possibilities (see Addendum C), his conclusion can be safely rejected again.

Walter Bradley and Charles Thaxton

Yet another attempt, one of the most sophisticated to date, has been published. Walter Bradley and Charles Thaxton wrote "Information Theory and the Origin of Life" in The Creation Hypothesis: Scientific Evidence for an Intelligent Designer
(J.P. Moreland, ed., InterVarsity Press, 1994, pp. 173-234). But they
still commit the same fallacies as always. For example, as a start to
their project, they tell us that "if a protein had one hundred active
sites, the probability of getting a proper assembly would be...4.9 x
10^-191" (p. 190). Of course, they do not mention that this is only true
if the first replicating protein had to be exactly, and only, one
hundred amino-acids in length, and if only one exact protein could get
life started. When we factor in the possibility that millions of
possible proteins of dozens of different sizes might do it, the odds for
life starting this way are not so grim. But they make no attempt to
account for this, so their statistic is useless. This number also
assumes that only, and exactly, twenty amino-acid types must be
involved--but since there are thousands of types, and for all we know
any combination of any of them may have begun a replicating life-form
(the fact that our phylogeny ends up with these twenty is, after all,
most likely chance, not necessity), it follows that this assumption of
twenty kinds, no more and no less, also invalidates their statistic.
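The underlying "single target" fallacy can be illustrated numerically. In the following sketch, the chain length of 100 follows their example, but the target count of 10^80 is a purely hypothetical figure invented for illustration, not an empirical estimate:

```python
import math

# Sequences of 100 residues over the 20 standard amino-acid types:
log_total = 100 * math.log10(20)
print(round(log_total))  # ~130: about 10^130 possible sequences

# If only ONE sequence counts as functional, the per-trial odds are
# 1 in 10^130. If instead some large number of sequences could seed
# replication -- say 10^80, a number assumed here for illustration --
# the per-trial odds improve by 80 orders of magnitude:
log_targets = 80
print(round(log_total - log_targets))  # ~50: odds become 1 in 10^50
```

The point is not the particular numbers but the structure of the calculation: the numerator matters as much as the denominator, and Bradley and Thaxton never estimate the numerator.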
Bradley and Thaxton have their own book, The Mystery of Life's Origin
(1992), which I have read, and it contains no improvements--it is
essentially the exact same argument, with the same flaws. In that book
it boils down to this: they "assume that we are trying to synthesize a
protein containing 101 amino acids" and then determine "the inverse of
the estimate for the number of ways one can arrange 101 amino acids in a
sequence" and get 1 x 10^-117. But they again assume in this
calculation that exactly 20 types of amino acid must be involved, no
more or less (p. 145). Then they argue that "this ratio gives the
fraction of polypeptides that have the right sequence to be a protein"
but this is only one protein, even though, again, any number of
others might be equally sufficient. They take this useless number and
estimate that "the number of polypeptides that would be formed during
the assumed history of the earth would be...10^72" so "the probability
of producing one protein of 101 amino acids in five billion years is
only 1/10^45" (p. 146). But this is within the realm of the possible
[see 1].
So they try to make it harder by considering the odds of all these
acids being left-handed, bringing the odds up to 1 in 10^175 (p. 157),
but as I note in my review of Coppedge
above, there is little reason to include this factor, since
all-one-handed chains are actually more likely than mixed chains to
endure for longer periods, and to form on crystal surfaces, a fact they
fail to consider [see 2].

Gerald Schroeder

Another recent attempt at this sort of thing appears in The Science of God: The Convergence of Scientific and Biblical Wisdom by Gerald Schroeder
(Free Press, 1997), and this is the first example of the problem being
approached with the correct math, though still using bad assumptions.
Indeed, as we will see, he even proves the Darwinian case--but then
tries to turn this around against Darwinism by trumping up bogus
"concessions" that he claims to have made in his math, which, according
to him, render his own conclusions implausible!
Before addressing that rather unique argument, I will address a
statistic alluded to on pp. 91-2, where he examines a single gene
involved in eyes, Pax-6, which is 130 amino-acids long. With the
20-amino-acid system used by terrestrial life, there are 10^170
different ways a protein of this size could be arranged, and as he sees
it, it was arranged on five separate occasions, so that the odds against
this are, he says, 1 in (10^170)^5, or 1 in 10^850 (p. 92). That this
gene was independently evolved five times is questionable even on the
facts (see, for example, Ma'ayan Semo's discussion). What is most
important is that even Schroeder agrees that this
statistic is irrelevant, though you would never know it until reading
many pages on: on page 109 he says the truth lies somewhere between this
number and the results of Richard Dawkins' computer simulations. The
latter method calculates how long it takes for random mutations and
non-random selection and reproduction to produce a given result (similar
to what I did, manually, in chapter 9 of my review of Foster). So
anyone who cites the above number as fact is not reading Schroeder
carefully.
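For what it is worth, Schroeder's raw arithmetic (as distinct from his assumptions) roughly checks out, as a quick computation shows:

```python
import math

# A 130-amino-acid protein over 20 types has 20^130 arrangements:
log_arrangements = 130 * math.log10(20)
print(round(log_arrangements))  # ~169, which Schroeder rounds to 10^170

# Five assumed independent origins then give (10^170)^5:
print(170 * 5)  # 850, hence his "1 in 10^850"
```

So the flaw lies in what the number is taken to mean, not in the multiplication itself.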
Of course, it is true that natural selection doesn't aim for the
long-term goals that are assumed in both the Dawkins examples and my own
calculations in the Foster review. Even Schroeder recognizes this. In
nature we can see, in retrospect, that all the actual conditions added
up to a long term result, so that it appears to have had a long term
goal in mind. But this is an illusion. It is the stepwise advantages
that determine the course of evolution. Thus, we cannot know how many
paths of beneficial mutations would lead to Pax-6, and we also cannot
know whether any particular lineage of animals will follow any of
the paths to Pax-6 or anything like it. People like Schroeder think
this makes the evolution of Pax-6 unlikely. But this cannot be known.
Since advantage-providing mutations will almost always be selected and
copied, the end result will always be more complex and adaptive than
before, so that calculating the probability of achieving one end result,
whether by Schroeder's or Dawkins' or my own methods, will actually
tell us nothing about whether that result would or would not have
happened naturally. But the methods of Dawkins and myself do prove that
such a result is possible given the right conditions, and thus to
argue against us requires proving that those conditions did not obtain.
But the fact is that some complex long-term result will happen
naturally, no matter what it turns out to be. Even Schroeder does not
refute this. Although any one arrangement of stars in the sky will be
just as improbable as any other, nevertheless the odds of the stars
having one of those arrangements are 100%. Thus, the mere improbability
of an arrangement tells us nothing about whether it is a possible result
of any given process. But Schroeder, like other creationists, keeps
trying to articulate just such an irrelevant argument.
Beginning on page 109, Schroeder starts off in the right direction,
and his assumptions are worth reviewing by all other creationists.
Though his mathematical model is not quite the correct one to use here,
it is better than most, and he lists considerations that are
sophisticated enough to show how all other attempts at this are
hopelessly over-simplified (see also his improved remarks on pp. 120-1).
Even though he never generates a number for his readers to cite, I will
address his conclusion of improbability, because it is often cited now,
yet it is so vacuous. Indeed, his own math leads him to the conclusion
that, in fact, "the convergent organ appears within the time frame
presented by the fossil record" (p. 111). In other words, he concludes
that it is possible after all, when all the proper considerations are
made. So how does he turn this around? Not satisfied with having
falsified his own theory, he argues that his results are implausible
because "we have boosted the rate of gamete mutations a hundredfold over
the highest rates currently reported, while maintaining the conditions
that no mutations were fatal and all proper mutations were locked in"
(i.e. not lost by later mutations; p. 112). This, he says, "stretches
plausibility beyond its limits." Does it? He has presented an empty
argument. Each of these so-called plausibility-stretching "concessions"
to Darwinism is no such thing:

First of all, as for his increasing the rate of mutations a
hundredfold, that is a red herring--for one could just as easily
increase the population a hundredfold and the result will be exactly the
same. Yet he assumes a population of 100,000. But a population of ten
million is actually closer to reality for all small life forms--it is
practically a reality even for humans! Note that a "population" here is
not the total sum of all members of a species, but refers to a community
in which mating is likely to occur at some point among all member
families. So he has actually not stretched plausibility by assuming 10
mutations per mating (although such large leaps are possible when
erroneous chromosome mixing occurs). He has rather stretched
plausibility in the other direction by assuming a tiny population of
only 100,000. Thus, he has actually made no concession at all to
Darwinism.

Secondly, fatal mutations, no matter how frequent, would not
affect any of his calculations, so this is also no concession to
Darwinism. Since by being "fatal" they are not passed on and are
immediately lost, it is a moot point that he did not include them. It
doesn't even seem to occur to him that this must be going on--for he
does not add up all the mutated offspring that die in the womb or
shortly after birth, or even before puberty, which will have no effect
at all on his numbers, since they are not counted in the constant
population. He also assumes that each pair of parents bears only two
successful children, a hopelessly implausible assumption to make of
simple life forms. Even if fatal mutations outweighed beneficial ones by
a billion to one, only beneficial or neutral mutations will ever be
passed on. And at any given time there will be millions, indeed even
trillions, of maximally adapted organisms, all of whose offspring are
subject to mutation. All we need is one beneficial mutation among them in
order for Schroeder's plausibility-stretching "condition" to be not only
plausible, but fact. Thus, once again, his "concession" is fictitious.

Thirdly, his idea of "lock-in" is also no concession, for as
with the stars example, if the genome heads in a different direction, it
will end up somewhere else just as complicated. Thus, "lock in" has no
bearing on the odds of there being a complex final result, since the
odds of there being such a result are always 100%. And he cannot
calculate the odds of a specific end result and take that as the odds of there being any
end result, just as he cannot take the odds of the stars being in their
actual positions and use that as proof that their arrangement is so
improbable that god had to put them there. This is a giant non sequitur.
Since the end result is what we observe to have actually happened,
lock-in, on all the genes that did not mutate between their initial and
end conditions, is proven to have been the case--except, of course, for
all those creatures whose lineage broke off and headed in a different
direction, a feature of this issue he ignores entirely.

So Schroeder's own conclusion, that evolution is actually plausible,
by reason of mathematical proof, remains unrefuted, even despite his
irrational claim to the contrary. This must be one of the few cases in
history where a man very competently proves himself wrong, then claims
he is right.
Later, Schroeder applies his math to the evolution of humans and
chimps from a common ancestor. By claiming that "500,000 generations are
required for an 83% probability that the first of the 70,000 mutations
will have occurred" (p. 123), he concludes that hundreds of millions of
generations are needed to complete the task, though this is too much for
a seven-million-year change. But he is mistaken. First of all, his math
is wrong, because he assumes that the odds against a specific sequence
of specific genetic changes is the same thing as the odds against any
sequence of any genetic changes. This is the "configuration of the
stars" fallacy all over again. But even if his mathematical model were
correct, his conclusion is wrong: the number of "generations" he
calculates to be necessary is not inconsistent with a
7-million-year time span. For when he says "generations," his equations
only calculate the number of mutant offspring that are necessary, and not, in fact, the number of generations. Let's examine his math:
He argues that the odds against a needed mutation are 279,999 /
280,000. How he derives this is complicated, largely arbitrary, and
hardly justifiable (see p. 122), but we will not challenge the many
problems with the way he is using this mathematical model, or the
assumptions he is making. It follows even from his stated assumptions
that this is the improbability of a mutant child having a mutation in
the right place, not the improbability of such a child existing in any
given generation. When we consider a steady population of 100,000, and a
birth-rate of five children per parental pair, we get 500,000 copies
per generation (of which only 100,000 will be able to survive, if we
assume that is the limit of the environment). If we assume a rate of
mutation of 1 in 100 births (the frequency of Down syndrome alone comes
close to this), we will have 5000 mutants per generation. He assumes one
generation to be seven years (but multiplies this by 500 for bogus
reasons addressed below). With the assumptions I have given, his own
equation produces P = 1 - q = 1 - (279,999/280,000)^5000 = 1.8% as the
probability that a mutation will occur in the right place and get things
started, in only a single generation, not in 5000 generations as
he would argue. If 70,000 mutations are needed in 7 million years, with
seven years to a generation, there are a million generations, with one
mutation needed every fourteen generations or so. That gives us 70,000
mutants to play with (5000 mutants per generation, times 14
generations). This gives P = 1 - (279,999/280,000)^70,000 = 22%. In
other words, even using his own math, it is easily proven to be more
than possible for the needed changes to have occurred in the given time.
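These figures can be checked directly with Schroeder's own formula, P = 1 - q^n, under the assumptions just stated (a population of 100,000, five births per pair, one mutant per 100 births, hence 5,000 mutants per generation):

```python
# Schroeder's own formula: P = 1 - q^n, with q = 279,999/280,000
# the odds against any one mutant carrying the needed mutation.
q = 279_999 / 280_000

# 5,000 mutants in a single generation:
p_single = 1 - q ** 5_000
print(f"{p_single:.1%}")    # ~1.8%

# 70,000 mutants (fourteen generations' worth) per needed mutation:
p_fourteen = 1 - q ** 70_000
print(f"{p_fourteen:.0%}")  # ~22%
```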
So how does he torpedo this? By inserting, once again, bogus
assumptions:

He claims that mutation rates in human gametes are lower than 1
in 10,000 births, using that for all his subsequent equations (p. 121),
thus matching his population estimate of 10,000 (see below), so he can
claim only one mutant per generation. But the mere fact that a single
kind of mutation, Down syndrome, occurs once in every 700 births, is
proof positive that he is importing the most bogus of assumptions here [3].

He also assumes a breeding population of ten thousand, ignoring
the fact that all we need is one positive mutation in any one of all the
existing maximally adapted members of a species which can mate with any
other similarly-adapted members, for evolution to proceed. Limiting his
math to a population of ten thousand is thus absurd. It is tantamount
to saying that there are only ten thousand maximally adapted members of
any given species in one geographical region at any given time. But most
suspiciously, it deviates from his previous practice of assuming a
population of 100,000. His reasoning is on the right track, however--but
the solution is not a flat number, but a logarithm (see below).

Schroeder also uses compound interest formulas to conclude that a
mutation granting a 1% advantage in reproductive success [although see 4]
will take 500 generations to become dominant in a herd of 10,000. But
this is the wrong math. Since the generations are competing for
survival, and only 10,000 of every generation can win, the rate at which
an advantaged family will gain ground is much greater than a compound
interest rate (which assumes everyone wins). Since descendants are
fighting for a limited number of slots, a small advantage will quickly
become a windfall. In this case, each line of descent, accidents aside,
will win an equal number of slots, leaving two families per generation,
until the mutant comes along. With a 1% advantage this mutant will gain
one additional slot within 6 generations, leaving three families instead
of the two of its competitors, and then with geometric progression the
mutant line, accidents aside, will attain all 10,000 slots in only 19
generations (merely three centuries for humans). In fact, this dominance
will be gained so quickly that it will be barely visible in the fossil
record. That is why punctuated equilibrium is the observed pace of
evolution. And even with larger base populations, as long as the
populations are not isolated (causing them to evolve on different
paths), due to geometric progression even a tenfold population increase
adds a mere four or five generations to the total time it can take for a
mutant to gain dominance. If we consider this in his example, we do not
multiply generations by 500, but only 25 (at worst). Of course,
dominance in a population can happen even faster than this, if radical
changes in the environment lead to huge extinctions in competing
families. And we know this happened in the period of transition from
proto-ape to the humanoid line of descent. Thus, even Schroeder's
assumption of a mere 1% steady advantage does not reflect the reality.

It is not even necessary for dominance to be completely gained
in a population for evolution to proceed, and this is a fundamental flaw
in Schroeder's assumptions. Mutations keep happening, even before an
organism has grown in numbers, and since only beneficial mutations
matter (fatally mutated individuals will die off and have no effect on
the growth of the population--since parents always bear far more
children than can live), many steps of mutation can occur even during
the twenty or so generations it takes to dominate a population. This is
one reason why populations so easily branch off into different
evolutionary paths. This is something Schroeder does not account for. He
assumes that the next step can only occur after the first
mutation has dominated the population. It is true that we only get to
count the total population for potential mutations after the previous
mutation is general throughout the population. But we still get to count
the population of the mutants for every intervening generation as that
population takes over the general population. This requires logarithmic
calculations far beyond the scope of anything Schroeder does.
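The logarithmic relationship described above can be sketched as follows. The per-generation growth factor r here is an illustrative assumption of mine, not a figure from Schroeder:

```python
import math

# If a favored line's share of a fixed population grows roughly
# geometrically by a factor r per generation, the generations needed
# to fill N slots scale as log(N)/log(r), not linearly with N.
def generations_to_dominate(n_slots: int, r: float) -> float:
    return math.log(n_slots) / math.log(r)

r = 1.7  # hypothetical growth factor for the advantaged line
print(round(generations_to_dominate(10_000, r)))   # ~17 generations
print(round(generations_to_dominate(100_000, r)))  # ~22 generations
```

A tenfold larger population thus adds only log_r(10), about five generations at this growth rate, which is the point made above against Schroeder's compound-interest model.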

Ilya Prigogine

Schroeder cites Ilya Prigogine as stating in Physics Today
(in "Thermodynamics of Evolution," a two-part article spanning November
and December of 1972) that "the idea of the spontaneous genesis of life
in its present form is therefore improbable, even on the scale of
billions of years." This is inherently suspicious, since Prigogine is
famous for proving that order results from increasing entropy in
dissipative systems, rendering spontaneous complexity more probable than
ever before. So what is the context of this quote? As it happens, it
appears in the introduction to that famous research article
demonstrating that dissipative systems explain pre-biological evolution
and in fact almost all order, even functional order, in organic systems,
as a "purely deterministic" consequence of the laws of physics (ibid.
Dec. p. 44). In other words, Schroeder is quoting a contrafactual and
pretending it is a conclusion: the authors (Prigogine is only the lead
author of three, the others being Gregoire Nicolis and Agnes Babloyantz) first
set out the challenging problem (the fact that the present theory does
not account for biogenesis), which Schroeder quotes, then present the
solution, which is mathematically and experimentally proven (and not
only accounts for biogenesis, but also explains many other previously
unexplained features of living organisms).
Schroeder is thus guilty of deception, like those who quote Carl Sagan out of context (as I have discussed above).
Ilya Prigogine and his colleagues actually prove, in that very article,
that "the spontaneous genesis of life in its present form" is probable,
not improbable. Indeed, their work demonstrates that given any
"fluctuations" sufficiently far away from thermal equilibrium in a soup
of polynucleotides (and perhaps also polypeptides), the development of
complex reproducing systems is guaranteed, as a result of
naturally-occurring "autocatalytic cycles" (ibid. Dec. p. 38), in just
the same way that organized convection cycles arise naturally when water
is heated (a state which is also very far from equilibrium). In a
nutshell, the article comes to the exact opposite conclusion as Schroeder leads us to believe. So either Schroeder is trying to pull a fast one, or he did not read the article.

Murray Eden and the Wistar Institute

Schroeder cites a Wistar Institute conference as showing evidence
of the improbability of evolution. The symposium was transcribed from
audio and published in 1967 as Mathematical Challenges to the
Neo-Darwinian Interpretation of Evolution, a Symposium Held at the
Wistar Institute of Anatomy and Biology April 25 and 26, 1966, Paul
Moorhead and Martin Kaplan, eds. Needless to say, this is quite out of
date. Worse, it does not support Schroeder at all. Only one paper comes
anywhere near proposing that the origin of life and subsequent evolution
is improbable: Murray Eden, "Inadequacies of Neo-Darwinian Evolution as
a Scientific Theory" (pp. 5-20). He does not really argue that
evolution is improbable, but rather that no present theory accounts for
certain peculiarities of life on earth, especially the fact that all
living organisms are composed of a very tiny fraction of all the
possible proteins.
In particular, Eden argues that given all "polypeptide chains of
length 250 [amino acids] or less...There are about 20^250 such words or
about 10^325" (p. 7). This number is ripe for quoting, but it does not
stand as the odds against life, and even Eden did not imply such a
meaning--to the contrary, he admits that perhaps "functionally useful
proteins are very common in this space [of 10^325 arrangements]," and
facing tough criticism in a discussion period (where his paper was torn
apart, pp. 12-9) he was forced to admit again that perhaps "there are
other domains in this tremendous space which are equally likely to be
carriers of life" (p. 15). But his main argument is that life is
concentrated around a tiny fraction of this possible protein development
"space" and we have yet to explain why--although his critics point out
why in discussion: once one system involving a score of proteins was
selected, none others could compete even if they were to arise, thus
explaining why all life has been built on one tiny set of proteins. One
thing that even his critics in discussion missed is the fact that his
number is wrong: he only calculates the number of those chains that are
250 acids long, but he refers to all those and all smaller chains, and
to include all of those he must sum the total combinations for every
chain from length 1 to 250. Of course, the number "250" is entirely
arbitrary to begin with. He could have picked 100, 400, or 20. He gives
no arguments for his choice, and as we have seen, this can have nothing
to do with the first life, whose chain-length cannot be known or even
guessed at [5].
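The numerical correction just mentioned is easy to verify (Python's arbitrary-precision integers handle the full sum exactly):

```python
import math

# Eden's figure counts only chains exactly 250 long:
print(round(250 * math.log10(20), 1))  # ~325.3, his "about 10^325"

# Chains of length 1 through 250 form a geometric series; the full sum
# exceeds 20^250 by only a factor of about 20/19:
total = sum(20 ** k for k in range(1, 251))
print(round(total / 20 ** 250, 4))  # ~1.0526
```

So the omission barely changes the magnitude; the deeper problem remains the arbitrariness of the figure 250 itself.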
Among the huge flaws in Eden's paper, pointed out by his critics,
is that he somehow calculates, without explanation, that 120 point
mutations would require 2,700,000 generations (among other things, he
assumes a ridiculously low mutation rate of 1 in 1 million offspring).
But in reality, even if only 1 mutation dominates a population every 20
generations, it will only take 2400 generations to complete a 120-point
change--and that even assumes only 1 point mutation per generation, yet
chromosome mixing and gene-pool variation will naturally produce many at
a time, and mix and match as mating proceeds. Moreover, a beneficial
gene can dominate a population faster than 20 generations, and will also
be subject to further genetic improvements even before it has reached
dominance. I discuss all of these problems in my analysis of Schroeder above.
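The corrected arithmetic in the last paragraph is simple enough to state in a few lines:

```python
# Even granting one beneficial mutation spreading through the population
# only every 20 generations, and only one point change at a time:
mutations_needed = 120
generations_per_fixation = 20
print(mutations_needed * generations_per_fixation)  # 2400 generations
```

That is more than a thousandfold less than Eden's 2,700,000, before even counting parallel mutations in the gene pool.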
But in the same Wistar symposium publication, C. H. Waddington (in his
"Summary Discussion") hits the nail so square on the head that I will
quote his remarks at great length:

"The point was made that to account for some
evolutionary changes in hemoglobin, one requires about 120 amino acid
substitutions...as individual events, as though it is necessary to get
one of them done and spread throughout the whole population before you
could start processing the next one...[and] if you add up the time for
all those sequential steps, it amounts to quite a long time. But the
point the biologists want to make is that that isn't really what is
going on at all. We don't need 120 changes one after the other. We know
perfectly well of 12 changes which exist in the human population at the
present time. There are probably many more which we haven't detected,
because they have such slight physiological effects...[so] there [may
be] 20 different amino acid sequences in human hemoglobins in the world
population at present, all being processed simultaneously...Calculations
about the length of time of evolutionary steps have to take into
account the fact that we are dealing with gene pools, with a great deal
of genetic variability, present simultaneously. To deal with them as
sequential steps is going to give you estimates that are wildly out.
(pp. 95-6)

Clifford Wilson and John Weldon

A source of many statistical citations on the impossibility of life is a bizarre one indeed: Clifford Wilson and John Weldon, Close Encounters: A Better Explanation
(Master Books, San Diego, 1978, cf. pp. 320-3), a Christian attack on
UFO culture which assumes that UFO's are proven to exist, and argues
that "they are part of the pattern of signs and wonders of the 'last
days' as Satan takes on his prophesied role as an angel of light,
seeking to deceive the elect (see Matthew 24:24)." To them, UFO's are
the manifestations of demons. Part of their argument is based on the
"proof" that aliens can't exist, because life is too improbable to have
been created by anything but god's design, and in an appendix "Life by
Chance in Outer Space: Possible or Impossible" they cite various
statistics against the origin of life. They do no original work,
however, and so I have addressed their sources directly in the essay
above (Salisbury, Sagan, Borel, Coppedge, and Morris).
Nevertheless, I have seen this book cited as a source of these
statistics, and so I include it here in case this book is ever cited as
if it were a primary source.

William Dembski

Another work which cites other sources is William A. Dembski's The Design Inference: Eliminating Chance Through Small Probabilities
(Cambridge University Press, 1998). It presents no original calculation of
the odds of life forming by chance, but on p. 55 cites several other
sources: Salisbury, and Wilson and Weldon, all of whom have been addressed above.

Henry Morris

There is also the ever-famous book Scientific Creationism by Henry Morris
(Master Books, 1974, now past its 11th printing). This is littered with
bogus statistics, repeating all the mistakes noted above. We get the
popular 100-amino-acid chain on pp. 60-61: the odds of one exact chance
arrangement of such a chain, which by his calculations are 1 in 10^158,
and the odds against that arrangement occurring even once in the known
age and expanse of the universe, which are 1 in 10^53. Of course, as
I've said before, his assumptions invalidate his conclusions. The
assumption that life can only start with a chain of exactly 100 amino
acids, in exactly one arrangement, is obviously groundless. Likewise, he
yet again assumes that exactly twenty fixed amino acid types are
necessary, which is also groundless (as we've seen). For his claim that 100 units is too small,
he cites Morowitz, whose conclusions I have refuted in my discussion of
Coppedge above.
Morris then draws on an engineer's largely arbitrary idea that 1500
sequential steps are needed to achieve a "protein molecule" (pp. 64-65;
he cites the woefully outdated Marcel Golay, "Reflections of a Communications Engineer," Analytical Chemistry
33 (June 1961), p. 23). From this Morris calculates the odds against
this ever happening as 1 in 10^450. But his equations are totally wrong.
In fact, he makes exactly the same mistake as Foster. He does not
account for the three fundamental features of natural selection:
reproduction, mutation, and selection. He merely multiplies a sequence
of probabilities, which is not correct. See Chapter 9
of my review of Foster for more on the math Morris is supposed to use
here. Morris also assumes that only one sequence of 1500 steps will
begin life--but in fact there may be millions of different sequences
that will work, and there may be many different numbers of steps, and
any derivation of odds must sum the odds for all possibilities: i.e. the
odds for every possible number of steps, from 1 to infinity, and of
every arrangement of steps within each number of steps that will produce
a reproducing protein. This is impossible to know. Such a statistic
cannot be calculated, even using Morris' math.
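The mathematical point can be illustrated with a toy calculation. The numbers below are invented purely for illustration and have nothing to do with real chemistry; they only show that when many sequences work, the odds of success are the sum over all workable sequences, not the odds of one exact sequence.

```python
from math import factorial

# Suppose an assembly takes 10 steps, each step drawing one of 10
# possible components at random (invented numbers, for illustration).
n = 10
p_one_exact = (1 / n) ** n      # Morris-style odds of ONE fixed sequence
print(p_one_exact)              # ~1e-10, i.e. 1 in 10^10

# If ANY ordering of the ten components works, the odds must be summed
# over every successful sequence -- all 10! of them:
p_any_order = factorial(n) * p_one_exact
print(p_any_order)              # ~3.6e-4, several million times larger
```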
Lastly, Morris comes up with the final figure of 1 chance in
10^299,843 against the evolution of life (p. 69). Where does he get
this? It is the result of multiplying the sequential odds
against a million mutations occurring in just such a way as to produce a
horse! He accounts for the size and scope of the universe, but, as
before, he uses the wrong math. His equation does not account for
natural selection and thus his statistic is irrelevant. Even worse, as
before, he uses one single, exact genome, which has no bearing on the general
possibility of life. Even by his own flawed assumptions, these are not
the odds against life evolving, but the odds against the horse
evolving--and not even that, but the odds against a specific, individual horse
evolving. He makes no attempt to account for the fact that there are
trillions and trillions of viable horse genomes, and trillions and
trillions and trillions of genomes which correspond to viable life forms
of any kind. One cannot simply calculate the odds against a single,
individual person being just as they are and use that as the odds
against any person, or any life of any kind, existing at all. Just
because every sequential arrangement of hands in poker has the same odds
of being dealt as a royal flush, it does not follow that the odds
against any hand being dealt are the same as the odds of dealing a royal
flush. This is bogus reasoning in the extreme.
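The poker point can be made exact with a short computation; a standard 52-card deck is the only assumption. Every specific five-card hand is equally improbable, yet summed over all hands the probability of being dealt something is 1.

```python
from math import comb

total_hands = comb(52, 5)       # 2,598,960 distinct five-card hands
p_specific = 1 / total_hands    # identical for a royal flush or junk

print(p_specific)               # ~3.8e-7 for ANY named hand
print(total_hands * p_specific) # 1.0 -- some hand is always dealt
```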

Dean Overman

Dean Overman is a recent contender, having written A Case Against Accident and Self-Organization
in 1997. Overman cites many authors already refuted above, and his own
arguments are all vacuous attacks on straw men. In fact, he regularly
engages in deceptive omissions, rendering his work untrustworthy in the
extreme. For example, he cites (pp. 64-5) Bernd-Olaf Küppers (Information and the Origin of Life,
1990, pp. 59-60) on the impossibility of the random assembly of a
bacterium genome. But no one has ever argued or even thinks that this
has happened--bacteria are believed to be the highly advanced outcome of
millions of years of evolution from a much simpler beginning that is
unknown to us. There is certainly no reason to suppose bacteria are the
original life form. But Overman is deceiving his readers here, too,
because he is quoting Küppers out of context and blithely omitting the
fact that this is only the introduction of the problem, which even
Küppers himself acknowledges is fictitious--the rest of Küppers' book
then proves the quoted conclusion false, and shows that ordered
complexity can indeed arise from random selective processes. His proof
is mathematically rigorous and decisive, and is a much more refined
example of what I do in chapter 9 of my review of Foster (I even cited him as a precedent). Overman's failure to mention this to his readers demonstrates how little he is to be trusted.
Overman's only unique calculation is simply this: "the odds of an
accidental typing of [a 379-letter passage from] Shakespeare...is one in
10^536" (pp. 54-5), which commits the same error noted in chapter 9 of my review of Foster (who uses Wordsworth's poem Daffodils).
Anticipating the objection, Overman responds to analogies that include
selection forces by saying that "an invalid assumption is that the
'environment wipes out any wrong letter,' because this is the very
assumption which must be proved to show that random processes can
produce" the result. He claims "without any evidence, the term
'environment' is endowed with characteristics including powers of
intelligence to...'know what kind of organism is best and reject wrong
letters or sequences'" (p. 56). This is an embarrassing flight of
ignorance, or crafty distortion, I don't know which. Although it is true
that the selection analogies he criticizes are oversimplified (as even
their own authors admit, yet Overman doesn't tell his readers that), it
does not follow that his own totally random calculation is more
realistic. The truth lies in between, as even Gerald Schroeder
acknowledges (whom I critique above).
But Overman once again completely ignores this complexity, making his
analysis naive and useless--or deliberately deceitful, by omitting what
his readers ought to know.
Instead, Overman tries to use his straw man to argue that natural
selection assumes a "Superior Intelligence" (p. 57). The crucial phrase
where he departs from reality into his own fictitious version of things
is when he says above "without evidence." For in fact we have a great
deal of evidence that environments eliminate "wrong letters," since
genomes, and specifically mutations, that can't compete or survive are
eliminated, making room for those that are more aligned in their
organization with the demands of the environment. Overman thinks this is
"intelligent" because he assumes, quite wrongly, that only one outcome
is "correct." Thus, by ignoring the complexities behind the basic
selection analogies that he cites, he makes it appear that scientists
are arguing that "arriving at this Shakespearean sonnet" is the only possible outcome.
What Overman does not tell his readers is that it is more correct to say that any
legible and grammatically correct sonnet counts as a possible outcome,
not any specific sonnet. How Overman misses this point, which is
necessary to account for the immeasurable variety of genomes we observe
in nature, is unfathomable. He assumes that a natural process "has" to
end up producing, say, humans, but that in no way follows. The number of
possible outcomes, of viable living organisms, is endless, and nature
is not "intelligently" selecting only one or certain ones of these end
results. Rather, it is blindly selecting whichever ones happen to pop up
and that then survive in a given environment. There are no doubt vastly
superior human-like genomes, but nature chooses candidates by chance,
so the best patterns may never emerge (they are most unlikely to, in
fact). And once random candidates are created, how does nature choose
which will be reproduced? By eliminating those least able to endure and
compete in the given environment. Thus, no intelligence is required for
nature to create ever-more complex and robust organisms by chance and
natural forces alone. In the words analogy, nature produces paragraphs
that grow more and more legible over time, without any rules except
those set by survivability in the environment, and in time millions of fully legible
paragraphs will be produced--maybe not a specific Shakespearean sonnet,
but that is not necessary, since some kind of legible sonnet will
result--in fact, millions of them. To then look at these sonnets and
pretend that they were the only possible outcomes, and therefore the
odds against producing them are too great to bear, is to engage in the same
"stars in the sky" fallacy I mention several times above.
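The difference between Overman's all-at-once model and selection acting on variation can be seen in a classic "weasel"-style simulation, sketched below. Like the analogies Overman criticizes, it is deliberately oversimplified: the fixed target string stands in for "any legible paragraph," not a goal nature aims at. But it shows how reproduction, mutation, and selection reach in a few hundred generations what pure chance would need on the order of 10^40 trials to hit.

```python
import random

random.seed(1)  # deterministic for reproducibility
ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "methinks it is like a weasel"  # stands in for any "legible" string

def mutate(s, rate=0.05):
    """Copy a string, with occasional random copying errors."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def fitness(s):
    """Letters that 'survive' in the environment (match the target)."""
    return sum(a == b for a, b in zip(s, TARGET))

# Reproduction, mutation, and selection: each generation breeds 100
# mutant copies of the current string and keeps the fittest.
current = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while current != TARGET and generations < 10_000:
    current = max((mutate(current) for _ in range(100)), key=fitness)
    generations += 1

print(current, generations)  # converges in a few hundred generations
# Pure chance would need ~27**28 (about 1.2e40) trials for one hit.
```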
With that understood, examine the selection analogies he
criticizes. All assume that a certain set of environmental factors were
present--those factors which would be most likely to produce a specific
result (e.g. a human genome). This is a fair assumption for the purpose
of showing how such a sequence is possible, since the only way to
undermine the assumption is to show that the environmental conditions
were regularly contrary to the direction in which the genome was
shaped--but all evidence points to the exact opposite conclusion: every
development in the entire evolutionary tree presents itself as an
advantage within the context of some environmental change or opportunity
that preceded it, which can be confirmed in external evidence. Thus, it
is no more incredible to assume that all the conditions were just as
would be expected to mold a human genome from its earliest beginnings,
than it is to assume that all the conditions were just as would be
expected to produce a fat man named Winston Churchill in the highest
seat of British government during a world war with Germany exactly
twenty-two years after a previous such war. If we simply calculate the
bare odds of this, we could prove it impossible--yet it happened, and
without any intelligent intervention. Why? Because the sequence of
events was just so--and had it been any other way, so would the outcome
have been different, yet equally amazing in the complexity of the
coinciding details. It would be different if we uncovered evidence that
the events were not such as would be expected to produce this Winston
Churchill and surrounding facts, and so it would be different if we
uncovered evidence that the history of the earth did not match the
historical changes in life forms, but since, as in the case of
Churchill, we discover all the events in the past coinciding with the
outcome, we are more than permitted to see this as a natural, rather
than an incredibly improbable outcome.
Overman has two other shallow arguments that should be mentioned,
although neither has anything to do with any statistic he
calculates (the only figure he produces is naively irrelevant, as noted
above). First, he says "natural selection does not exist in
prebiological molecules" (p. 56). But this is again irrelevant. So long
as a random first replicator is possible, we do not need natural
selection to explain the rise of life, only its subsequent developments.
And as I have noted amply throughout this page, there is no sound
argument against a simple first replicator lying well within the realm
of the possible, and in fact there is positive evidence for such (see my
discussion of the tetrahymena
and other related scientific discoveries). His second argument (p. 76)
is that Ilya Prigogine's work (discussed above) confuses order (such as
in a crystal) with complexity (such as in a printed page). For this he
cites Hubert Yockey's article "Self Organization Origin of Life
Scenarios and Information Theory" (Journal of Theoretical Biology, 91 (1981), p. 20), and his book (pp. 245, 289), both addressed separately above.
But this is, like many of Yockey's reasonings, not relevant to the
question. Random mutation is no respecter of order, and selection is
partial to complexity. Thus, once the first replicator formed,
complexity would be an inevitable outcome of mutation and selection. We
don't know how "complex" the first replicator had to be or could have
been, nor do we know how many were possible.
But both Overman's and Yockey's entire reading of Prigogine is wrong.
His paper demonstrates quite conclusively that large numbers of very
long polymers can be naturally produced within the confines of the laws
of thermodynamics, that they will not just be strings of identical amino
acids, and that a form of prebiotic selection occurs, wherein randomly
produced polymers catalyze a certain chemical faster than competing
polymers, and thus the catalysis process "favors" certain catalyzed
products over others. With several such cycles engaging, complexity can
arise in a "selective" system even without genuine reproduction (instead
of "reproduction," the system he describes uses mass "production" as
the factor upon which natural selection acts).

Mark Ludwig and Guy Cramer

Two additional but inferior claims are cited in the online essay The Crutches of Atheism by Guy Cramer. I will quickly address these. The first is Mark Ludwig's Computer Viruses, Artificial Life and Evolution
(1993), where the odds against the spontaneous assembly of the E. coli
bacterium are assumed to be equivalent to the odds against the formation
of life, calculated as 1 in 10^3,000,000, or perhaps as "low" as 1 in
10^2,300,000 (p. 274). But no one believes E. coli is the first organism
or anything like the first organism--it is a highly advanced creature,
the end result of over a billion years of evolution. This is the same
mistake made by Coppedge, and I discuss what is wrong with it above.
This statistic is thus irrelevant. Guy Cramer himself then brings up
the claim that the odds against uniform chirality in the E. coli genome
are 1 in 10^3,600,000. But this again makes the same mistake as
Coppedge, and many others. Accidental uniform chirality
("homochirality") for an organism as simple as the tetrahymena
would be nowhere near as improbable. Cramer is also assuming that
uniform chirality must necessarily be random, when there are several
possible nonrandom causes. I discuss this issue in detail above. Cramer also
claims that organic chemist William Bonner "gave this summation on these
odds" against homochirality: "Terrestrial explanations are impotent and
nonviable." Although not a statistic, Cramer claims this view must be
respected because Bonner is "the world's leading homochiral researcher"
but that is claiming too much. Bonner is a leading homochiral researcher, but far from the only one, and in fact Cramer fails to mention that his source for Bonner's quote [6] clearly explains that there are several leading homochiral experts who disagree with Bonner and who have very plausible ideas, including the one man who, if anyone, actually can
claim to be the world's leading origins-of-life chemist, Stanley
Miller. Moreover, Cramer fails to tell his readers that Bonner actually
believes that homochiral molecules can be manufactured naturally, and
that the odds are only against a terrestrial source of such
molecules (thus Bonner's opinion does not really support the creationist
position that life is too improbable to be a natural product).
Summarizing the position and research of Bonner and his supporters,
the source Cramer cites includes an explanation of confirming evidence
that homochiral molecules can be very easily made in a variety of
natural conditions, one being a supernova. From a supernova, homochiral
molecules can then be delivered to other regions of space via impacts
from comets formed from the ejected material. Life could then have
originated on Earth (and maybe has on other worlds) after an impact from
a heavily homochiral comet--in fact, such an impact may have caused the
origin of life by completing all the conditions necessary. Also,
Cramer's source does not report what Bonner thinks of Miller's proposal
that life can begin without homochirality in a pre-DNA system and
develop homochirality as an advantage, and though Bonner says other
proposed causes of homochirality on Earth have not "yielded convincing
conclusions" in his opinion, many scenarios have nevertheless been shown
possible.

Conclusion: One Pervasive Error Plagues Them All

There is still the same, single, fundamental problem with all these
statistical calculations, one that I mention in my review of Foster: no
one knows what the first life was. People like Morowitz can try to
calculate what is, at a minimum, possible, and laboratory experiments,
like that which discovered the powers of tetrahymena (see Addenda C),
can approach a guess, but these guesses still do not count as
knowledge, and it is not sound to claim that, simply because we don't
know what it was, we therefore cannot assume there was such a simple life
form. And even if we accept such an argument, to go from there to "god"
is essentially a god-of-the-gaps argument. When we did not know how the
bumble-bee flew, was that an adequate ground for positing god as the
answer, or was it instead cause for further scientific investigation
aimed at finding out the natural explanation? All of science is the
result of choosing the latter approach. Once there was a time when
nothing was explained. Since then, everything which has been explained
has been found to have a natural, not a divine, explanation. Although
this does not prove that all future explanations will be of like kind,
it shows that it is not at all unreasonable to expect this--and it is
not a very reliable bet to expect the opposite.
Theories which make the origin of life plausible are hypotheses like any others, awaiting future research--in fact, generating
that research. On the other hand, in the words of Frank Salisbury,
"Special creation or a directed evolution would solve the problem of the
complexity of the gene, but such an idea has little scientific value in
the sense of suggesting experiments." And the experiments suggested by
Salisbury and his colleagues led, in fact, to a simplification of the
very problem that vexed Salisbury in 1969. Science, once again, gets
somewhere. Creationism gets us nowhere. Coppedge suspected in his day
"many evolutionists have avoided such investigations [into the odds
against life forming] because they intuitively recognize that it will
threaten evolutionary doctrine" (p. 234). Yet scientists hardly avoided
the matter at all. Quite to the contrary, while creationists engaged in
no actual research for twenty-five years and contributed nothing to our
understanding of biology, scientists chewed away at the very problems
Salisbury and Coppedge discussed, and solved a great many of them (see
Stuart Kauffman, The Origins of Order: Self-Organization and Selection in Evolution,
1993). That none of them thought to make arbitrary and groundless
guesses for the purpose of calculating a useless statistic is a
testament to their wisdom, just as it is a testament to the ignorance of
those, like Coppedge, who actually do this. We only need consider which
has added to our knowledge to see who is making better use of their
time.

[1] Regarding exactly when improbability approaches impossibility, frequently cited is the famous French statistician Emile Borel, Les Probabilites et la Vie (Presses Universitaires de France, 1943), translated into English by Maurice Baudin as Probabilities and Life
(Dover, 1962). On p. 28 of the latter, Borel calculates that, when
examining questions on a cosmic scale, anything with odds worse than 1
in 10^50 can be regarded as impossible, while anything with odds between
10^0 and 10^50 could have happened at least once in the age and expanse
of the cosmos.

[1a] This is more than a possibility: it has
recently been proven. Life forms have been cultivated in the lab that
employ new kinds of amino acids apart from the standard 20. Moreover, it
has been known for some time now that human beings actually employ 21
amino acids, not the 20 that are presumed to encompass all known
biology--which in itself proves that the number of amino acids employed
in biology is an adaptive, cumulative outcome of evolution. In fact,
computer simulations have provided evidence that the standard 20 amino
acids in biology were selected for naturally, since they are the most
stable of all amino acids--therefore, life could easily have started
with a much larger repertoire and all other, less stable amino acids
eventually selected out of the biosphere, leaving the twenty we now
know. Some of those twenty may even be new amino acids, not at all
original to the first life but developed through mutation, as is the
case in the special 21st amino acid developed in the human biochemical
system. On all these new facts, cf. Tina Hesman, "Code Breakers:
Scientists are Altering Bacteria in a Most Fundamental Way," Science News, June 3, 2000, pp. 360-2.

[1b] This is in fact what most scientists now
believe. But, in fact, it may have begun with an even simpler and
stronger PNA system, cf. Science News, June 3, 2000, p. 363, citing experimental results published in the Proceedings of the National Academy of Sciences,
April 11, 2000. The related claim that cellular structure is required
for life, but is too complex to arise by accident, is wrong on both
counts: life could conceivably, in the right conditions, survive without
a cell wall long enough to evolve one, but in fact we know that cell
walls grow naturally even in space (see "Life's Housing May Come From
Space," Science News, Feb. 3, 2001, p. 68), and experiments since
the 1960's have shown that cell-like structures can develop under
natural conditions on earth, which living organisms would take shelter
in and eventually evolve a more complex control over.

[2] See also the dispute between Bonner and Miller
above, but for most of the possible explanations for left-handed
proteins see the following sources: William C. McHarris, "Handedness in
Nature," Analog, Jan. 1986; M.E. Popselov, "Chiral-Selective Radiolysis in a Magnetic Field," Physics Letters A, 220(4-5): 194-200 (1996 Sep 9); W.A. Bonner, "Chirality and Life," Origins of Life and Evolution of the Biosphere,
25(1-3): 175-190 (1995 Jun); J.M. Greenberg et al. "Interstellar Dust,
Chirality, Comets and the Origins of Life - Life from Dead Stars," Journal of Biological Physics,
20(1-4): 61-70 (1994); Tranter, G.E. et al., "Computational Studies of
the Electroweak Origin of Biomolecular Handedness in Natural-Sugars," Proceedings of the Royal Society of London A - Mathematical and Physical Sciences, 436(1898): 603-615 (1992 Mar 9); Mason, S.F., "Prebiotic Sources of Biomolecular Handedness," Chirality, 3(4): 223-226 (1991), S. Chandrasekhar, "Auto-catalysis as the Possible Origin of Biomolecular Chirality," Current Science
70:4, pp. 259-260 (Feb. 25, 1996), Y. He, F. Qi, S. Qi, "Effect of
Chiral Helical Force Field on Molecular Helical Enantiomers and Possible
Origin of Biomolecular Homochirality," Medical Hypotheses, 51:2, pp. 125-128 (August, 1998).

[3] Julian Huxley, in Evolution in Action (1953), reports that each individual gene
has a rate of mutation between 1 in 50,000 replications and 1 in several
million, and figures the average to be 1 in 100,000 (p. 49). These are
errors at the gamete level, and thus are multiplied by the rate of birth
of sperm and egg cells, and the number of genes in the genome, both of
which are astronomical. In other words, if we follow this to its
conclusion, mutations are so common that a vast majority of human sperm
and eggs will have mutations. This no doubt explains why despite
constant bombardment of eggs with sperm a pregnancy often still does not
result, and why even then more than 80% of conceptions result in
spontaneous miscarriage. Moreover, since a considerable portion of the
human genome is inactive, many mutations are unlikely to have any
effect, and still other mutations will simply create genetic patterns
that are already found naturally in the human population and thus go
unnoticed. To make matters more complicated, sexual genetic combination
often allows many mutations to become inert under the weight of a
dominant gene on its paired chromosome, with a chance of becoming
noticeable only in later generations. Since no author I have ever seen
has taken into account this astonishing maze of circumstances in
determining a rate of relevant mutation by birth, I cannot come to any
conclusion from this myself--except to say that it cannot be that
unusual, even given the human genome's developed ability to repair
itself.

[4] Schroeder says even this is generous, i.e. the
supposition that a point mutation can actually grant as much as a 1%
advantage in reproductive success. Yet it has recently been
experimentally proven that a single gene mutation can have a far greater
influence than that. In one instance involving Monkeyflowers, a single
gene mutation created a 50% reproductive advantage ("Monkeyflowers hint
at evolutionary leaps" Science News October 16, 1999 [156:16] p.
244). Another single gene mutation radically altered flower
pigmentation. So it is quite possible that Schroeder is not being
generous, but conservative.

[5] Eden also argues that genetic transfers between
bacteria are so rare that not even one ordered pair could have been
transferred this way in all the history of life (he means by
extra-sexual processes, p. 9). This is based on arbitrary assumptions on
his part, and is proved false by the transfer of entire ordered
chromosomes between bacteria, and by the transfer of entire gene
sequences from a virus into a host's genome, cf. Science News
Nov. 13, 1999 [156:20], p. 311, "as much as 1 percent of human DNA
consists of genetic fossils of viruses that once inserted their genes
into the genomes of human ancestors." And "the recent sequencing of
microbial genomes reveals that horizontal transfers have occurred far
more often than most researchers had appreciated," for instance in one
bacterium as much as 20% of its DNA came from genetic transfers from
other species ("Pass the Genes Please: Gene Swapping Muddles the History
of Microbes," Science News 22 July 2000 [158:4] pp. 60-1). Even
so, such gene transfer is not needed for evolution anyway (it is just
one more among many means of randomly creating new genomes for trial in
the forge of nature), so his point is entirely moot.

[6] Jon Cohen's very brief summary of views
presented at a meeting on homochirality, "Getting All Turned Around Over
the Origins of Life," Science 267 (3 March 1995), pp. 1265-6.

Note regarding Ian Musgrave's article, "Lies, Damned Lies, Statistics, and Probability of Abiogenesis Calculations" (www.talkorigins.org/faqs/abioprob/abioprob.html).
This essay was published in 1998 and contains some insignificant
mathematical errors and perhaps some misleading language, problems
Musgrave assures me he will correct in a forthcoming revision. For now,
please note:

In general, Musgrave only aims to demonstrate that creationist
arguments fail to take into account plausible hypotheses that would make
natural biogenesis on earth highly probable. He does not mean to assert
that this high probability is a proven scientific fact--but rather that
creationists have not ruled out the relevant hypotheses, nor have they
demonstrated their own hypotheses, and consequently their probability
arguments are groundless.

Musgrave's mathematical errors are: (a) where he says "you would
have enough molecules to generate our particular replicator in a few
tens of years" the correct math produces 200 million years, not "a few
tens of years," but since the available window for the formation of life
was around 400 million years, his conclusion remains unaffected; (b)
Musgrave claims "on the early Earth it is likely that the ocean had a
volume of 1 x 10^24 litres," when the actual figure is more like 10^15,
but even when we do the math correctly, we get a total time interval of
2-4 years (which means there would be a 50% chance to get his target
after about 1 year), so again his conclusion remains correct (but still
explicitly hypothetical); and (c) Musgrave says "the Ghadiri ligase
could be generated in one week, and any cytochrome C sequence could be
generated in a bit over a million years (along with about half of all
possible 101 peptide sequences...)," but this is incorrect: the Ghadiri
sequence would take a few years given the correct math, not a week, and
no 101-peptide sequence is possible by random assembly in even a
trillion years, unless (perhaps) Musgrave takes into account all
sufficiently earth-like worlds in the history of the universe to
date--which he does not do. However, again, his conclusions are
unaffected by removing or correcting these statements.
