THE EVOLUTION LIST

THE EVOLUTION LIST is a forum for commentary, discussion, essays, news, and reviews that illuminate the theory of evolution and its implications in original and insightful ways. Unless otherwise noted, all materials may be quoted or re-published in full, with attribution to the author and THE EVOLUTION LIST. The views expressed herein do not necessarily reflect those of Cornell University, its administration, faculty, students, or staff.

Wednesday, January 28, 2009

TIDAC: Identity, Analogy, and Logical Argument in Science

The analysis of the functions of analogy in logical reasoning that I am about to describe is, in my opinion, not yet complete. I have been working on it for several years (actually, about 25 years all told), but I have yet to be completely satisfied with it. I am hoping, therefore, that by making it public here (and eventually elsewhere) it can be clarified to everyone’s satisfaction.

This version of this analysis is a revised and updated version of an earlier post at this blog.

SECTION ONE: ON ANALOGY

To begin with, let us define an analogy as “a similarity between separate (but perhaps related) objects and/or processes”. As we will see, this definition may require refinement (and may ultimately rest on premises that cannot be proven - that is, axioms - rather than on formal proof). But for now, let it stand as given:

COMMENTARY 1.0: This is essentially a statement of the logical validity of tautology (from the Greek tautó, a contraction of tò autó, meaning “the same”, and logos, meaning “word” or “reason”). As Ayn Rand (and, according to her, Aristotle) asserted:

AXIOM 1.0: A = A

From this essentially unprovable axiom, the following corollaries may be derived:

COROLLARY 1.1: All analogies that are not identities are necessarily imperfect.

COROLLARY 1.2: Since only tautologies are prima facie "true", this implies that all analogical statements (except tautologies) are false to some degree. This leads us to a second axiom:

AXIOM 2.0: A ≠ not-A (that is, all imperfect analogies are false to some degree).

COROLLARY 2.1: Since all non-tautological analogies are false to some degree, all arguments based on non-tautological analogies are also false to the same degree.

COMMENTARY 2.0: The validity of all logical arguments that are not based on tautologies is a matter of degree, with some arguments being based on less false analogies than others.

CONCLUSION 1: As we will see in the following sections, all forms of logical argument (i.e. transduction, induction, deduction, abduction, and consilience) necessarily rely upon non-tautological analogies. Therefore, to summarize:

All forms of logical argument (except for tautologies) are false to some degree.

Our task, therefore, is not to determine if non-tautological logical arguments are true or false, but rather to determine the degree to which they are false (and therefore the degree to which they are also true), and to then use this determination as the basis for establishing confidence in the validity of our conclusions.

SECTION TWO: ON VALIDITY, CONFIDENCE, AND LOGICAL ARGUMENT

Based on the foregoing, let us define validity as “the degree to which a logical statement is free of false analogies.” Therefore, the closer an analogy is to a tautology, the more valid that analogy is.

DEFINITION 2.0: Validity = The degree to which a logical statement is free of false analogies.

COMMENTARY: Given the foregoing, it should be clear at this point that (with the exception of tautologies):

There is no such thing as absolute truth; there are only degrees of validity.

In biology, it is traditional to assess the validity of an hypothesis by calculating confidence levels using statistical analyses. According to these analyses, if the similarity between the observed data and the values predicted by the hypothesis being tested reaches the 95% confidence level, then the hypothesis is considered to be valid. In the context of the definitions, axioms, and corollaries developed in the previous section, this means that valid hypotheses in biology may be thought of as being at least 95% tautological (and therefore less than 5% false).

DEFINITION 2.1: Confidence = The degree to which an observed phenomenon conforms to (i.e. is similar to) a hypothetical prediction of that phenomenon.

This means that, in biology:

Validity (i.e. truth) is, by definition, a matter of degree.
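Definitions 2.0 and 2.1 can be given a concrete, if simplified, form. The Python sketch below is my own illustration: the function names, the strict-equality test for "conformity", and the use of 95% as a validity cutoff are assumptions made for concreteness, not standard statistical practice.

```python
# A minimal sketch of DEFINITION 2.1, treating "confidence" as the
# fraction of observations that conform to a hypothesis's prediction,
# and DEFINITION 2.0's 95% cutoff as a validity threshold.
# All names and thresholds here are illustrative assumptions.

def confidence(observations, predicted):
    """Fraction of observations matching the predicted value."""
    matches = sum(1 for obs in observations if obs == predicted)
    return matches / len(observations)

def is_valid(observations, predicted, threshold=0.95):
    """Deem an hypothesis 'valid' if conformity meets the threshold."""
    return confidence(observations, predicted) >= threshold

taste_tests = ["sour"] * 19 + ["sweet"]   # 19 of 20 observations conform
print(confidence(taste_tests, "sour"))    # 0.95
print(is_valid(taste_tests, "sour"))      # True
```

On this sketch, "validity is a matter of degree" just means that `confidence` returns a number between 0 and 1, not a verdict of true or false.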

Following long tradition, an argument (from the Latin arguere, meaning “to make clear”) is considered to be a statement in which a premise (or premises, if more than one; from the Latin prae, meaning “before”, and mittere, meaning “to send”) is related to a conclusion (i.e. the end of the argument). There are five kinds of argument, distinguished by the means by which a premise (or premises) is related to a conclusion: transduction, induction, deduction, abduction, and consilience, which will be considered in order in the following sections.

DEFINITION 2.2: Argument = A statement of a relationship between a premise (or premises) and a conclusion.

Given the foregoing, the simplest possible argument is a statement of a tautology, as in A = A. Unlike all other arguments, this statement is true by definition (i.e. on the basis of AXIOM 1.0). All other arguments are only true by matter of degree, as established above.

SECTION THREE: ON TRANSDUCTION

The simplest (and least effective) form of logical argument is argument by analogy. The Swiss child psychologist Jean Piaget called this form of reasoning transduction (from the Latin trans, meaning “across”, and ducere, meaning “to lead”), and showed that it is the first and simplest form of logical analysis exhibited by young children. We may define transduction as follows:

DEFINITION 3.0: Transduction = Argument by analogy alone (i.e. by simple similarity between a premise and a conclusion).

A tautology is the simplest transductive argument, and is the only one that is “true by definition.” As established above, all other arguments are “true only by matter of degree.” But to what degree? How many examples of a particular premise are necessary to establish some degree of confidence? That is, how confident can we be of a conclusion, given the number of supporting premises?

As the discussion of confidence in Section 2 states, in biology at least 95% of the observations that we make when testing a prediction that flows from an hypothesis must be similar to those predicted by the hypothesis. This, in turn, implies that there must be repeated examples of observations such that the 95% confidence level can be reached.

However, in a transductive argument, all that is usually stated is that a single object or process is similar to another object or process. That is, the basic form of a transductive argument is:

Ai => Aa

where:

Ai is an individual object or process

and

Aa is an analogous (i.e. similar, but not identical, and therefore non-tautological) object or process

Since there is only a single example in the premise in such an argument, to state that there is any degree of confidence in the conclusion is very problematic (since it is nonsensical to state that a single example constitutes 95% of anything).

In science, this kind of reasoning is usually referred to as “anecdotal evidence,” and is considered to be invalid for the support of any kind of generalization. For this reason, arguments by analogy are generally not considered valid in science. As we will see, however, they are central to all other forms of argument, but there must be some additional content to such arguments for them to be considered generally valid.

EXAMPLE 3.0: To use an example that can be extended to all four types of logical argument, consider a green apple. Imagine that you have never tasted a green apple before. You do so, and observe that it is sour. What can you conclude at this point?

The only thing that you can conclude as the result of this single observation is that the individual apple that you have tasted is sour. In the formalism introduced above:

Ag => As

where:

Ag = green apple

and

As = sour apple

While this statement is valid for the particular case noted, it cannot be generalized to all green apples (on the basis of a single observation). Another way of saying this is that the validity of generalizing from a single case to an entire category that includes that case is extremely low; so low that it can be considered to be invalid for most intents and purposes.
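The weakness of a single-case argument can be made vivid with a standard interval estimate. The sketch below is my own illustration; the Wilson score interval is one conventional choice, not something the post prescribes. It shows that one observation leaves the plausible proportion of sour green apples spread over almost the entire range.

```python
import math

# Sketch: a Wilson score interval for the proportion of sour green
# apples. With a single observation the interval spans nearly the whole
# range, illustrating why one case supports no generalization.
# (The choice of interval is my assumption, not the post's.)

def wilson_interval(successes, n, z=1.96):   # z = 1.96 ~ 95% confidence
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

lo, hi = wilson_interval(1, 1)     # one sour apple out of one tasted
print(round(lo, 2), round(hi, 2))  # roughly 0.21 to 1.0: almost uninformative
```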

SECTION FOUR: ON INDUCTION

A more complex form of logical argument is argument by induction. According to the Columbia Encyclopedia, induction (from the Latin in, meaning “into”, and ducere, meaning “to lead”) is a form of argument in which multiple premises provide grounds for a conclusion, but do not necessitate it. Induction is contrasted with deduction, in which true premises do necessitate a conclusion.

An important form of induction is the process of reasoning from the particular to the general. The English philosopher and scientist Francis Bacon, in his Novum Organum (1620), elucidated the first formal theory of inductive logic, which he proposed as a logic of scientific discovery, as opposed to deductive logic, the logic of argumentation. The Scottish philosopher David Hume's critique of induction has influenced 20th-century philosophers of science, who have focused on the question of how to assess the strength of different kinds of inductive argument (see Nelson Goodman and Karl Popper).

We may therefore define induction as follows:

DEFINITION 4.0: Induction = Argument from individual observations to a generalization that applies to all (or most) of the individual observations.

EXAMPLE 4.0: You taste one green apple; it is sour. You taste another green apple; it is also sour. You taste yet another green apple; once again, it is sour. You continue tasting green apples until, at some relatively arbitrary point (which can be stated in formal terms, but which is unnecessary for the current analysis), you formulate a generalization: “(all) green apples are sour.”

In symbolic terms:

A1 + A2 + A3 + …An => As

where:

A1 + A2 + A3 + …An = individual cases of sour green apples

and

As = green apples are sour

As we have already noted, the number of similar observations (i.e. An in the formula, above) has an effect on the validity of any conclusion drawn on the basis of those observations. In general, enough observations must be made that a confidence level of 95% can be reached, either in accepting or rejecting the hypothesis upon which the conclusion is based. In practical terms, conclusions formulated on the basis of induction have a degree of validity that is directly related to the number of similar observations; the more similar observations one makes, the greater the validity of one’s conclusions.
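The relationship between the number of observations and the validity of an inductive conclusion can be sketched quantitatively. The fragment below is an illustration using a normal-approximation margin of error and an assumed observed proportion of 0.95; the post itself prescribes no formula. It shows the margin of error shrinking roughly as 1/sqrt(n).

```python
import math

# Sketch: the margin of error around an inductive generalization
# shrinks roughly as 1/sqrt(n), so more similar observations yield
# higher validity. The observed proportion 0.95 is an assumption
# made for the sake of the arithmetic.

def margin_of_error(p_hat, n, z=1.96):
    """Half-width of a normal-approximation 95% interval."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

for n in (10, 100, 1000):
    print(n, round(margin_of_error(0.95, n), 2))
# prints: 10 0.14, 100 0.04, 1000 0.01
```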

IMPLICATION 4.0: Conclusions reached on the basis of induction are necessarily tentative and depend for their validity on the number of similar observations that support such conclusions. In other words:

Inductive reasoning cannot reveal absolute truth, as it is necessarily limited only to degrees of validity.

It is important to note that, although transduction alone is invalid as a basis for logical argument, transduction is nevertheless an absolutely essential part of induction. This is because, before one can formulate a generalization about multiple individual observations, it is necessary that one be able to relate those individual observations to each other. The only way that this can be done is via transduction (i.e. by analogy, or similarity, between the individual cases).

In the example of green apples, before one can conclude that “(all) green apples are sour” one must first conclude that “this green apple and that green apple (and all those other green apples) are similar.” Since transductive arguments are relatively weak (for the reasons discussed above), this seems to present an unresolvable paradox: no matter how many similar repetitions of a particular observation, each repetition depends for its overall validity on a transductive argument that it is “similar” to all other repetitions.

This could be called the “nominalist paradox,” in honor of the philosophical tradition founded by the English cleric and philosopher William of Ockham, of “Ockham’s razor” fame. On the face of it, there seems to be no resolution for this paradox. However, I believe that a solution is entailed by the logic of induction itself. As the number of “similar” repetitions of an observation accumulate, the very fact that there are a significant number of such repetitions provides indirect support for the assertion that the repetitions are necessarily (rather than accidentally) “similar.” That is, there is some “law-like” property that is causing the repetitions to be similar to each other, rather than such similarities being the result of random accident.
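This proposed resolution can be given a rough quantitative form. In the sketch below, the 50/50 chance of accidental agreement between any two repetitions is an assumption made purely for illustration; the point is only that the probability of n repetitions agreeing by accident collapses rapidly, lending indirect support to a law-like cause.

```python
# Sketch of the proposed resolution of the "nominalist paradox": if
# each repetition matched the others only by accident (assumed here,
# purely for illustration, to be a 50/50 chance), the probability that
# n repetitions all agree shrinks toward zero as n grows.

def chance_of_accidental_agreement(n, p_match=0.5):
    """Probability that n independent repetitions agree by accident."""
    return p_match ** n

for n in (5, 10, 20):
    print(n, chance_of_accidental_agreement(n))
# prints: 5 0.03125, 10 0.0009765625, 20 9.5367431640625e-07
```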

SECTION FIVE: ON DEDUCTION

A much older form of logical argument than induction is argument by deduction. According to the Columbia Encyclopedia, deduction (from the Latin de, meaning “from”, and ducere, meaning “to lead”) is a form of argument in which individual cases are derived from (and validated by) a generalization that subsumes all such cases. Unlike inductive argument, in which no number of individual cases can prove the generalization based upon them to be “absolutely true,” the conclusion of a deductive inference is necessitated by the premises. That is, the conclusions (i.e. the individual cases) cannot be false if the premise (i.e. the generalization) is true, provided that they follow logically from it.

Deduction may be contrasted with induction, in which the premises suggest, but do not necessitate a conclusion. The ancient Greek philosopher Aristotle first laid out a systematic analysis of deductive argumentation in the Organon. As noted above, Francis Bacon elucidated the formal theory of inductive logic, which he proposed as the logic of scientific discovery.

Both processes, however, are used constantly in scientific research. By observation of events (i.e. induction) and from principles already known (i.e. deduction), new hypotheses are formulated; the hypotheses are tested by applications; as the results of the tests satisfy the conditions of the hypotheses, laws are arrived at (i.e. by induction again); from these laws future results may be determined by deduction.

We may therefore define deduction as follows:

DEFINITION 5.0: Deduction = Argument from a generalization to an individual case, and which applies to all such individual cases.

EXAMPLE 5.0: You assume that all green apples are sour. You are confronted with a particular green apple. You conclude that, since this is a green apple and green apples are sour, then “this green apple is sour.”

In symbolic terms:

As => Ai

where:

As = all green apples are sour

Ai = any individual case of a green apple

As noted above, the conclusions of deductive arguments are necessarily true if the premise (i.e. the generalization) is true. However, it is not clear how such generalizations are themselves validated. In the scientific tradition, the only valid source of such generalizations is induction, and so (contrary to the Aristotelian tradition), deductive arguments are no more valid than the inductive arguments by which their major premises are validated.
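The point can be put in code. In this small sketch (my own illustration, with an assumed confidence value for the major premise), a formally valid deductive conclusion simply inherits whatever confidence its inductively validated major premise has, and no more.

```python
# Sketch of the section's claim: a deductive conclusion can be no
# more valid than the inductive generalization serving as its major
# premise. The 0.95 figure is an illustrative assumption.

def deductive_confidence(premise_confidence, follows_logically=True):
    """A formally valid deduction inherits its premise's confidence;
    a formally invalid one confers none."""
    return premise_confidence if follows_logically else 0.0

green_apples_are_sour = 0.95   # assumed inductive confidence in As
print(deductive_confidence(green_apples_are_sour))  # 0.95
```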

IMPLICATION 5.0: Conclusions reached on the basis of deduction are, like conclusions reached on the basis of induction, necessarily tentative and depend for their validity on the number of similar observations upon which their major premises are based. In other words:

Deductive reasoning, like inductive reasoning, cannot reveal absolute truth about natural processes, as it is necessarily limited by the degree of validity upon which its major premise is based.

Hence, despite the fact that induction and deduction “argue in opposite directions,” we come to the conclusion that, in terms of natural science, the validity of both is ultimately dependent upon the number and degree of similarity of the observations that are used to infer generalizations. Therefore, unlike the case in purely formal logic (in which the validity of inductive inferences is always conditional, whereas the validity of deductive inferences is not), there is an underlying unity in the source of validity in the natural sciences:

All arguments in the natural sciences are validated by inductive inference.

SECTION SIX: ON ABDUCTION

A somewhat newer form of logical argument is argument by abduction. According to the Columbia Encyclopedia, abduction (from the Latin ab, meaning “away”, and ducere, meaning “to lead”) is the process of reasoning from individual cases to the best explanation for those cases. In other words, it is a reasoning process that starts from a set of facts and derives their most likely explanation from an already validated generalization that explains them. In simple terms, the new observation(s) is/are "abducted" into the already existing generalization.

The American philosopher Charles Sanders Peirce (last name pronounced like "purse") introduced the concept of abduction into modern logic. In his works before 1900, he generally used the term abduction to mean “the use of a known rule to explain an observation,” e.g., “if it rains, the grass is wet” is a known rule used to explain why the grass is wet:

Known Rule: “If it rains, the grass is wet.”

Observation: “The grass is wet.”

Conclusion: “The grass is wet because it has rained.”

Peirce later used the term abduction to mean “creating new rules to explain new observations,” emphasizing that abduction is the only logical process that actually creates new knowledge. He described the process of science as a combination of abduction, deduction, and induction, stressing that new knowledge is only created by abduction.

This is contrary to the common use of abduction in the social sciences and in artificial intelligence, where Peirce's older meaning is used. Contrary to this usage, Peirce stated in his later writings that the actual process of generating a new rule is not constrained by traditional rules of logic. Rather, he pointed out that humans have an innate ability to make correct logical inferences, and explained the possession of this ability by the evolutionary advantage it confers.

We may therefore define abduction as follows (using Peirce's original formulation):

DEFINITION 6.0: Abduction = Argument that validates a set of individual cases via an explanation that cites the similarities between the set of individual cases and an already validated generalization.

EXAMPLE 6.0: You have a green fruit, which is not an apple. You already have a tested generalization that green apples are sour. Since the fruit in your hand is green and resembles a green apple, you conclude (by analogy to green apples, which you have already validated to be sour) that it is probably sour.

In symbolic terms:

(Fg = Ag) + (Ag = As) => Fg = Fs

where:

Fg = a green fruit

Ag = green apple

As = sour green apple

and

Fs = a sour green fruit

In the foregoing example, it is clear why Peirce asserted that abduction is the only way to produce new knowledge (i.e. knowledge that is not strictly derived from existing observations or generalizations). The new generalization (“this new green fruit is sour”) is a new conclusion, derived by analogy to an already existing generalization about green apples. Notice that, once again, the key to formulating an argument by abduction is the inference of an analogy between the green fruit (the taste of which is currently unknown) and green apples (which we already know, by induction, are sour).
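The structure of this example can be sketched numerically. In the fragment below, the similarity score, the confidence value, and the rule of multiplying them together are all assumptions made for concreteness; they are not part of Peirce's formalism. The point is only that confidence in the abduced conclusion is discounted by the imperfection of the underlying analogy.

```python
# Sketch of EXAMPLE 6.0: confidence in "this green fruit is sour" as
# the product of (a) the fruit's assumed similarity to green apples
# and (b) the assumed inductive confidence that green apples are sour.
# Both numbers and the multiplication rule are illustrative only.

def abductive_confidence(similarity, rule_confidence):
    """Confidence transferred through an imperfect analogy."""
    return similarity * rule_confidence

fruit_vs_apple_similarity = 0.8   # assumed resemblance of Fg to Ag
green_apples_are_sour = 0.95      # assumed inductive confidence in Ag = As
print(abductive_confidence(fruit_vs_apple_similarity, green_apples_are_sour))
```

Note that a perfect analogy (similarity 1.0) would transfer the rule's confidence undiminished, while a weak analogy transfers very little, which is just the limitation of transduction restated.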

IMPLICATION 6.0: Conclusions reached on the basis of abduction, like conclusions reached on the basis of induction and deduction, are ultimately based on analogy (i.e. transduction). That is, a new generalization is formulated in which an existing analogy is generalized to include a larger set of cases.

Again, since transduction, like induction and deduction, is only validated by repetition of similar cases (see above), abduction is ultimately just as limited as the other three forms of argument:

Abductive reasoning, like inductive and deductive reasoning, cannot reveal absolute truth about natural processes, as it is necessarily limited by the degree of validity upon which it is premised.

SECTION SEVEN: ON CONSILIENCE

The newest form of logical argument is argument by consilience. According to Wikipedia, consilience (from the Latin con, meaning “with”, and salire, meaning “to jump”: literally "to jump together") is the process of reasoning from several similar generalizations to a generalization that covers them all. In other words, it is a reasoning process that starts from several inductive generalizations and derives a "covering" generalization that is both validated by and strengthens them all.

The English philosopher and scientist William Whewell (pronounced like "hewel") introduced the concept of consilience into the philosophy of science. In his book, The Philosophy of the Inductive Sciences, published in 1840, Whewell defined the term consilience by saying “The Consilience of Inductions takes place when an Induction, obtained from one class of facts, coincides with an Induction obtained from another different class. Thus Consilience is a test of the truth of the Theory in which it occurs.”

To extend the example for abduction given above, if the grass is wet (and rain is known to make the grass wet), the road is wet (and rain is known to make the road wet), and the car in the driveway is wet (and rain is known to make the car in the driveway wet), then rain can make everything outdoors wet, including objects whose wetness is not yet verified to be the result of rain.

Independent Observation: “The grass is wet.”

Already validated generalization: "Rain makes grass wet."

Independent Observation: “The road is wet.”

Already validated generalization: "Rain makes roads wet."

Independent Observation: “The car in the driveway is wet.”

Already validated generalization: "Rain makes cars in driveways wet."

Conclusion: “Rain makes everything outdoors wet.”

One can immediately generate an application of this new generalization to new observations:

New observation: "The picnic table in the back yard is wet."

New generalization: “Rain makes everything outdoors wet.”

Conclusion: "The picnic table in the back yard is wet because it has rained."

We may therefore define consilience as follows:

DEFINITION 7.0: Consilience = Argument that validates a new generalization about a set of already validated generalizations, based on similarities between the set of already validated generalizations.

EXAMPLE 7.0: You have a green peach, which when you taste it, is sour. You already have a generalization about green apples that states that green apples are sour and a generalization about green oranges that states that green oranges are sour. You observe that since the peach you have in hand is green and sour, then all green fruits are probably sour. You may then apply this new generalization to all new green fruits whose taste is currently unknown.

In symbolic terms:

(Ag = As) + (Og = Os) + (Pg = Ps) => Fg = Fs

where:

Ag = green apples

As = sour apples

Og = green oranges

Os = sour oranges

Pg = green peaches

Ps = sour peaches

Fg = green fruit

Fs = sour fruit

Given the foregoing example, it should be clear that consilience, like abduction (according to Peirce), is another way to produce new knowledge. The new generalization (“all green fruits are sour”) is a new conclusion, derived from (but not strictly reducible to) its premises. In essence, inferences based on consilience are "meta-inferences", in that they involve the formulation of new generalizations based on already existing generalizations.
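One way to see why consilience strengthens confidence is to treat the already validated generalizations as independent lines of evidence. The sketch below is an illustration under assumed numbers and an assumed independence rule, not a method from the text: it computes the chance that all of the supporting generalizations mislead us at once.

```python
# Sketch: treating each already-validated generalization (green
# apples, oranges, and peaches are sour) as independent evidence for
# the covering generalization "green fruits are sour." Under the
# (illustrative) assumption of independence, the chance that all of
# them are wrong at once is the product of their error rates.

def consilient_confidence(confidences):
    """1 minus the probability that every supporting
    generalization is mistaken simultaneously."""
    chance_all_wrong = 1.0
    for c in confidences:
        chance_all_wrong *= (1.0 - c)
    return 1.0 - chance_all_wrong

apples, oranges, peaches = 0.95, 0.95, 0.95   # assumed confidences
print(consilient_confidence([apples, oranges, peaches]))  # ~0.999875
```

Each additional concurring generalization multiplies down the residual error, which is one way of cashing out Whewell's claim that consilience is "a test of the truth of the Theory in which it occurs."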

IMPLICATION 7.0: Conclusions reached on the basis of consilience, like conclusions reached on the basis of induction, deduction, and abduction, are ultimately based on analogy (i.e. transduction). That is, a new generalization is formulated in which existing generalizations are generalized to include all of them, and can then be applied to new, similar cases.

Again, since consilience, like induction, deduction, and abduction, is only validated by repetition of similar cases, consilience is ultimately just as limited as the other four forms of argument:

Consilient reasoning, like inductive, deductive, and abductive reasoning, cannot reveal absolute truth about natural processes, as it is necessarily limited by the degree of validity upon which it is premised.

However, there is an increasing degree of confidence involved in the five forms of logical argument described above. Specifically, simple transduction produces the smallest degree of confidence; induction somewhat more (depending on the number of individual cases used to validate a generalization); deduction more still (since its generalizations have already been validated by induction); abduction even more (because a new set of observations is related to an already existing generalization, validated by induction); and consilience most of all (because new generalizations are formulated by induction from sets of already validated generalizations, themselves validated by induction).

CONCLUSIONS:

• Transduction relates a single premise to a single conclusion, and is therefore the weakest form of logical validation.

• Induction validates generalizations only via repetition of similar cases, the validity of which is strengthened by repeated transduction of similar cases.

• Deduction validates individual cases based on generalizations, but is limited by the induction required to formulate such generalizations and by the transduction necessary to relate individual cases to each other and to the generalizations within which they are subsumed.

• Abduction validates new generalizations via analogy between the new generalization and an already validated generalization; however, it too is limited by the formal limitations of transduction, in this case in the formulation of new generalizations.

• Consilience validates a new generalization by showing via analogy that several already validated generalizations together validate the new generalization; once again, consilience is limited by the formal limitations of transduction, in this case in the validation of new generalizations via inferred analogies between existing generalizations.

• Taken together, these five forms of logical reasoning (call them "TIDAC" for short) represent five different but related means of validating statements, listed in order of increasing confidence.

• The validity of all forms of argument is therefore ultimately limited by the same thing: the logical limitations of transduction (i.e. argument by analogy).

• Therefore, there is (and can be) no ultimate certainty in any description or analysis of nature insofar as such descriptions or analyses are based on transduction, induction, deduction, abduction, and/or consilience.

• All we have (and can ever have) is relative degrees of confidence, based on repeated observations of similar objects and processes.

• Therefore, we can be most confident about those generalizations for which we have the most evidence.

• Based on the foregoing analysis, generalizations formulated via simple analogy (transduction) are the weakest and generalizations formulated via consilience are the strongest.

Monday, January 26, 2009

The IDEA Dodo Tries to Fly...

Well, that's nice. But if you follow the link, you will find that according to their own report, the IDEA network currently includes

"...about [sic-1] a dozen IDEA Club chapters that are active or in-formation.[sic-2]"

Interesting; back in December 2005, Dr. William Dembski wrote that

"...there are thirty such centers [sic-3] at American colleges and universities..."

From 30 "centers" (i.e. "clubs") in December 2005 to "about a dozen that are active or in-formation" in January 2009. To me (and admittedly I'm not a mathematician...more's the pity), that indicates a decline of at least 60% since 2005. And that assumes that all of the clubs included in the "dozen that are active or in-formation" now actually exist. That is, they meet now and then, and a few people show up for their meetings.

Of course, we have no way of empirically verifying whether any of the "about a dozen" clubs actually exist or not. However, what anyone with a web connection can empirically verify is that the links to "active" IDEA clubs posted at the national IDEA Center have not changed. They are all either dead (i.e. they return a 404: File Not Found message) or they lead back to old press releases from the national IDEA Center.
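The link-checking part of this empirical investigation can be automated. Below is a minimal Python sketch: the URL shown is a placeholder, not one of the actual IDEA club links, and only the status-code classification is meant to mirror the categories described above (dead 404s versus pages that still resolve).

```python
# A minimal sketch of the proposed "empirical investigation": fetch a
# link and record whether it is dead. The URL below is a placeholder,
# not a real IDEA club link.

from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def classify_status(code):
    """Map an HTTP status code to the post's categories."""
    if code == 404:
        return "dead (404: File Not Found)"
    if 200 <= code < 300:
        return "reachable"
    return f"other ({code})"

def check_link(url):
    """Fetch a URL and classify the result, treating HTTP errors
    (like 404) as data rather than exceptions."""
    try:
        with urlopen(url, timeout=10) as resp:
            return classify_status(resp.status)
    except HTTPError as err:
        return classify_status(err.code)
    except URLError as err:
        return f"unreachable ({err.reason})"

print(classify_status(404))  # dead (404: File Not Found)
# e.g. print(check_link("https://example.org/idea-club"))  # placeholder URL
```

Re-running such a script now and then would give exactly the periodic, checkable record proposed below.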

A spokesperson for the national IDEA Center (could it be Casey himself?) claims that they are diligently updating these links and will soon post a new map and list with new hot links to new active IDEA clubs. Well, could be; they will be able to do this if such clubs actually exist and are meeting now and then.

So, I propose a new empirical investigation (that's what science is all about, right?). Let's all return periodically to the page at the national IDEA Club that lists the links to "active" IDEA clubs and see if they have been updated. If they have, and they include information on recent activities at those clubs, then we can conclude that the IDEA club movement really isn't dead, it's just restin' on account of bein' tired and shagged out after a long squawk (like the Norwegian blue; beau'iful plumage...).

But, if the links don't get updated, or they are but there are no new activities listed, then we can conclude that their self-reports of their non-demise will have been greatly exaggerated.

[sic-1] "...about a dozen..." Odd, most people don't have trouble counting up to twelve. Is it "a dozen" or eleven, or six, or one, or what? If anybody should know, it should be the people in charge, right?

[sic-2] "in-formation" Would that have anything to do with Dr. Dembski's soi-disant "Law of Conservation of Information"? Or does it mean that a couple of people have been thinking about getting together to talk about ID sometime? Just curious...

[sic-3] As for Dr. Dembski's "ID centers": would those be the little groups of two or three ID supporters huddled in a corner of the student union cafeteria, talking about the "evilutionists" and ending their "center's" activities with a prayer? Sounds like a major "ID research center" to me...

Horizontal gene transfer (especially as the result of viral transduction) has been known to occur for almost half a century. In my undergraduate genetics course at Cornell (which I took in the spring of 1972) we did a lab in which we used lambda bacteriophage to transfer genetic material from one bacterial colony to another. Ergo, none of the mechanisms of HGT described in the article in New Scientist are all that new.

Indeed, I have listed at least six mechanisms of HGT in my blogpost on the “engines of variation” located here. In that list, they are numbers 28, 29, 33, 36, 40, and 41. Most of these HGT mechanisms have been known for decades and are among the best understood mechanisms of increasing both genetic and phenotypic variation.

What is relatively new is the application of the information gained about HGT to phylogenetic reconstruction. HGT is the rule among bacteria, and apparently occurs fairly frequently among eukaryotes as well. Evolutionary biologists, and especially phylogeneticists and systematists, have been using HGT data for phylogenetic reconstruction for over a decade, even among eukaryotes. So, once again this is not new.

Nonetheless, the claims have been made by ID supporters that:

(1) the New Scientist article is pointing out that "Darwinism" is a bankrupt theory, and

(2) that HGT is more easily explained as part of "intelligent design theory" (ID).

Does the increasing recognition of HGT and its use in phylogenetic reconstruction mean that the current theory of evolution is invalid, or that ID can explain these phenomena better? On the contrary, the more we learn about HGT the more it seems to be even more random and undirected than vertical gene transfer (i.e. genetic recombination and heredity via reproduction). To be specific, the overwhelming majority of identified HGTs are of non-coding DNA sequences that have no detectable effect on the phenotypes of the organisms in which they have occurred.

That is, almost all of the DNA sequences that have been unambiguously shown to be the result of HGT are sequences that neither code for proteins nor participate in the regulation of coding sequences. Rather, they are sequences that have “gone along for the ride”, especially as the result of RNA retroviral HGT. Such sequences are so common that they are routinely used to construct and modify genetic phylogenies, as well as to determine genetic homologies.

The vast majority of HGTs are essentially neutral genetic mutations, as first described by Motoo Kimura in his neutral theory of molecular evolution. As such, they produce an immense amount of genetic variation without producing a corresponding amount of phenotypic variation. Furthermore, when such phenotypic variation does occur, it is more often deleterious than beneficial (usually mildly deleterious, as pointed out by Tomoko Ohta in her nearly neutral theory of molecular evolution). Only very rarely are such HGTs beneficial, and then only in relatively restricted ecological and evolutionary settings.

But neutral or slightly deleterious genetic changes (such as those produced by the vast majority of HGTs) are exactly the opposite of what one would expect to see as the work of an “intelligent designer”. Such an entity would (as several of the commentators in this thread have suggested) tailor HGTs to produce adaptive (i.e. beneficial) changes in the phenotypes of the recipients of its HGTs. Either that, or the “intelligent designer” doesn’t “tailor” its HGTs at all, but rather produces them randomly, rather like a dealer in a card game. But in that case, the actions of a soi-disant “intelligent designer” would be indistinguishable from Darwinian evolution, and any reference to its actions (and/or any inference of its existence) would be unnecessary (and would therefore violate Occam’s razor).

One last point: although the vast majority of HGTs produce either no phenotypic effect or slightly deleterious phenotypic effects, a relatively small number produce phenotypic effects that are correlated with increased survival and/or reproductive success. Unlike the vast majority of HGTs, these beneficial HGTs rapidly proliferate in the populations in which they arise, in exactly the way Darwin proposed in 1859. That is, they are preserved and passed on (while deleterious HGTs are eliminated), and thereby become more common over time among the populations in which they occur.
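The population-genetic logic in the preceding paragraphs can be illustrated with a toy simulation. The sketch below uses the standard Wright-Fisher model (a textbook idealization, not a model of any particular HGT; the population size, selection coefficient, and trial counts are arbitrary illustrative assumptions) to compare the fate of a strictly neutral new variant with that of a mildly beneficial one:

```python
import random

def wright_fisher(pop_size, s, generations, seed):
    """Follow one new variant (e.g. a single HGT-derived sequence) in a
    Wright-Fisher population of pop_size gene copies.

    s is the selection coefficient: 0 for a neutral variant, > 0 for a
    beneficial one. Returns the final frequency (0.0 = lost, 1.0 = fixed).
    """
    rng = random.Random(seed)
    freq = 1.0 / pop_size  # the variant starts as a single copy
    for _ in range(generations):
        if freq in (0.0, 1.0):
            break  # the variant has been lost or has gone to fixation
        # Selection shifts the expected frequency; drift then resamples it.
        p = freq * (1 + s) / (freq * (1 + s) + (1 - freq))
        freq = sum(rng.random() < p for _ in range(pop_size)) / pop_size
    return freq

def fixation_rate(s, trials=1000, pop_size=100, generations=2000):
    """Fraction of independent new variants that reach fixation."""
    return sum(wright_fisher(pop_size, s, generations, seed) == 1.0
               for seed in range(trials)) / trials

# Kimura: a neutral variant fixes with probability ~ 1/pop_size, so the
# vast majority are simply lost by drift.
print("neutral fixation rate   :", fixation_rate(s=0.0))
# A mildly beneficial variant is still usually lost in its first few
# generations, but when it survives, it proliferates and fixes far more
# often, in just the way Darwin proposed.
print("beneficial fixation rate:", fixation_rate(s=0.05))
```

In this toy setup the qualitative result is stable across seeds: nearly all neutral variants vanish, a tiny fraction drift to fixation, and the beneficial variant fixes roughly an order of magnitude more often than the neutral one.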

Sunday, January 18, 2009

Martian Rocks and Intelligent Design

Take a good, long look at the photograph at the top of this post (it's from NASA). Does anything about it strike you as odd? Go ahead, I'll wait...

For example, do the rocks in the photograph appear to be simply "randomly" scattered about? How about size; are there patterns in the distribution of the different sizes of rocks in the photo? And how about placement - are any of the rocks in lines, or do they show similar orientation of edges, do any of them have a coating of dust or sand on them, and are any of them stacked on top of each other (or even overlapping)?

Hmm...the more you look at this picture, the less it looks like a random and unrelated collection of objects (that is, rocks). Indeed, it looks as if someone (perhaps Someone "intelligent" with a lot of time on His hands) spent no small amount of time arranging them (after all, such arrangements apparently cover a significant fraction of the surface of the planet Mars).

It's observations like these that lead some "intelligent design theorists" (IDTs) to infer the existence and active interference in natural processes of an "intelligent designer". A significant subset of IDTs go on to infer that this "intelligent designer" is the God of the Judeo-Christian-Muslim-Mormon faith(s). In so doing, they are following the lead of the founder of the neo-Paleyan "intelligent design and natural theology" movement, the Anglican minister Rev. William Paley. True, what we see in the photograph is rocks, not Rev. Paley's pocketwatch, and that's the windswept plains of Mars, not a windswept heath on Earth, but "design is design" and all design points to the existence of a "designer", right?

Furthermore, many IDTs use a very familiar mode of argument in asserting the existence of "intelligent design". We could call this mode of argument the "duck" argument, as in the old saw "if it looks like a duck, flies like a duck, and quacks like a duck, it's a duck".

What's the operative word here? The word like, of course, which is the tipoff that what is being marshaled is an "argument by analogy".

I have already written about the weaknesses of arguments by analogy (see here, here, and here). Most recently, I pointed out in a critique of a recent blogpost by Dr. Steven Fuller that arguments by analogy are extremely weak: they lack almost all logical force. Dr. Fuller replied that what IDTs (including himself) use are not, in fact, arguments by analogy. Instead, he argued that what they (and he) were pointing out were "partial identities". That is, a pattern of rocks like the one in the photograph is "partially identical" to, say, a flagstone patio set in place by a "designer" having access to large amounts of dust and sand, but only a few, small rocks.

What, precisely, does the phrase "partial identity" mean? Is it something like being "partially pregnant" or "partially dead"? Or does it mean "only partly identical"? I thought that "identical" meant "identical". That is:

Two things that are identical are the very same in each and every possible way.

Isn't a thing that is "partially identical" to some other thing also "like" that thing? Seems so to me. Indeed, the phrases "partially identical" and "partial identity" seem to me to be oxymorons, plain and simple.

To be as precise as possible:

"Partial identity" is identical to "analogy"

Ergo, Dr. Fuller apparently agrees with me that ID arguments are essentially "arguments by analogy" and therefore have virtually no logical force.

But, to get back to the curious behavior of the rocks on Mars...what's that, you say? The "behavior" of the rocks? Can it be that the rocks are "behaving"?

Indeed, they are. After long and patient analysis, it has become clear that the rocks on Mars (at least the ones in the size range shown in the photograph) "behave". To be specific, they move around on the dusty/sandy surface of the Martian plains.

Furthermore, their movement is not random. On the contrary, there is a very precise pattern to it. The rocks shown in the photograph actually move against the wind, and away from each other. The latter pattern of movement is why they appear to be non-randomly placed on the surface of the Martian plain. Furthermore, they move apart at a rate that is apparently related to their size. Small rocks move apart further and faster than large rocks (very large rocks apparently don't move much at all).

In the parlance of "intelligent design theory", something that acts non-randomly in such a way as to produce non-random patterns of activity is an "agent". Furthermore, according to IDTs, agents are "intelligent" by definition. If it moves like an agent, arranges itself like an agent, and produces patterns that are "partially identical"/analogous to patterns produced by an agent, it's an agent.

Ergo, the rocks in the photograph are either agents, or have been arranged by agents.

"Wind removes loose sand in front of the rocks, creating pits there and depositing that sand behind the rocks, creating mounds. The rocks then roll forward into the pits, moving into the wind. As long as the wind continues to blow, the process is repeated and the rocks move forward.

"The rocks protect the tiny sand mounds from wind erosion. Those piles of sand, in turn, keep the rocks from being pushed downwind and from bunching up with one another....

"The process is nearly the same with a cluster of rocks. However, with a cluster of rocks, those in the front of the group shield their counterparts in the middle or on the edges from the wind...

"Because the middle and outer rocks are not directly hit by the wind, the wind creates pits to the sides of those rocks. And so, instead of rolling forward, the rocks roll to the side, not directly into the wind, and the cluster begins to spread out."

In other words, the pattern of rocks shown in the photograph is the result of purely natural forces and the explanation presented above is a "naturalistic" explanation.
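The qualitative mechanism described in the block quote can be caricatured in a few lines of code. This is a deliberately crude sketch, not the researchers' actual model: the wind-shadow length, lateral threshold, and step sizes are invented for illustration. It shows only that two purely local rules ("roll into the wind when exposed; roll sideways when shielded") are enough to make a random scatter of rocks creep upwind and spread apart:

```python
import math
import random

def simulate_rocks(n_rocks=30, steps=200, seed=1):
    """Toy model of wind-driven rock motion on a sandy plain.

    The wind comes from the +x direction. An exposed rock rolls into the
    pit scoured on its upwind side (x increases). A rock sitting in the
    wind shadow of a neighbour instead rolls sideways (y changes), so
    clusters gradually spread out. All thresholds are arbitrary.
    """
    rng = random.Random(seed)
    rocks = [[rng.uniform(0.0, 10.0), rng.uniform(0.0, 10.0)]
             for _ in range(n_rocks)]

    def shielded(i):
        x, y = rocks[i]
        # A neighbour slightly upwind and laterally close casts a shadow.
        return any(j != i
                   and 0.0 < rocks[j][0] - x < 1.0
                   and abs(rocks[j][1] - y) < 0.5
                   for j in range(n_rocks))

    for _ in range(steps):
        for i in range(n_rocks):
            if shielded(i):
                rocks[i][1] += rng.choice([-0.05, 0.05])  # roll sideways
            else:
                rocks[i][0] += 0.05                       # roll upwind
    return rocks

def mean_nearest_neighbour(rocks):
    """Average distance from each rock to its nearest neighbour."""
    return sum(min(math.hypot(x - a, y - b)
                   for j, (a, b) in enumerate(rocks) if j != i)
               for i, (x, y) in enumerate(rocks)) / len(rocks)

before = simulate_rocks(steps=0)   # same seed, so the same starting scatter
after = simulate_rocks(steps=200)
print("mean spacing before:", round(mean_nearest_neighbour(before), 3))
print("mean spacing after :", round(mean_nearest_neighbour(after), 3))
```

No "agent" appears anywhere in the rules, yet the rocks end up farther upwind and, typically, more evenly spaced than they began.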

There is, of course, an essentially infinite number of imaginable explanations for the arrangement of the rocks in the photo. They could have been arranged by an invisible "agent" who prefers rocks to be "organized". They could have been arranged during the creation of Mars (which, of course, happened on 23 October 4004 BC, along with the creation of all of the other planets, asteroids, comets, planetesimals, bolides, etc.). They could have been placed by little green men or hexapedal strongly-thewed Barsoomians, taking a break between sword fights. The list of possibilities is quite literally endless.

However, the scientific consensus is that the "naturalistic" explanation in the block quote above is most consistent with observations and with an assumption that natural processes alone are sufficient to explain them. "Agents" may be involved, and so may Barsoomians, but neither are necessary to explain the arrangement and behavior of the Martian rocks, and so they are not included in a scientific explanation of such arrangement and behavior.

The same is the case for biological organisms and the explanation for their existence: the theory of evolution by natural selection. All of the explanations listed above, including not only wind erosion and "intelligent rock placement", but also tiddly-winking six-armed green warriors (and obsessive-compulsive demiurges), are consistent with the pattern shown in the photograph. However, only the scientific explanation contained in the block quote is also consistent with the universal assumption underlying all of the natural sciences: that only natural forces be invoked to explain observed patterns in natural objects and processes, until such forces are shown to be insufficient to explain such things.

It should also go without saying that, if one is in favor of explaining the arrangement and behavior of Martian rocks in the incredibly limited environment of science classes in the public schools, rather than going into all of the imaginable explanations (including Tars Tarkas and Yahweh Elohim), one should stick to the explanation(s) worked out by practicing professional scientists, and confine the other explanations to classes intended for the non-empirical speculations of amateur philosophers and theologians (or am I being redundantly redundant?).

P.S. To Dr. Fuller: contrary to your aspersion, I am not a "closet theistic evolutionist" — like Newton, "I make no hypotheses!" Unlike Newton, I am an anarchist Heinlein-libertarian Zen Quaker evolutionary psychologist who prefers not to be labeled.

Monday, January 12, 2009

Ground Rules and Moderation Policy

PURPOSE: THE EVOLUTION LIST is a forum for the dissemination and discussion of ideas and information about the theory of evolution in all its dimensions, including its implications for philosophy, religion, and world views.

DISCLAIMER: Unless otherwise noted, all materials may be quoted or re-published in full, with attribution to the author and THE EVOLUTION LIST. The views expressed herein do not necessarily reflect those of Cornell University, its administration, faculty, students, or staff.

FORMAT: Although reading THE EVOLUTION LIST is open to everyone, commenting on posts to this blog is entirely moderated. That is, every comment will first be forwarded to the moderator, who will (after due consideration, and working within the constraints of time and work load) decide whether to allow it to be posted to the "Comments" section of the blog.

GROUND RULES: The founder/moderator of this blog is a great admirer of the traditional values of the academy: collegiality, intellectual freedom, personal responsibility, and respect for others. Therefore, commenting on the posts to this blog has several rules, which will be strictly enforced by the moderator:

• Ad hominem attacks, blasphemy, profanity, rudeness, and vulgarity will not be tolerated (although heresy will always be encouraged). However, vigorous attacks against a member's position are expected and those who cannot handle such should think twice before they post a comment.

• Long-running debates that are of interest only to a small number of individuals should be taken elsewhere, preferably via private email (i.e. if the moderator gets tired of reading posts concerning the population density [N] of terpsichorean demigods inhabiting ferrous microalpine environments, the posters will be strongly encouraged to settle it outside).

• Pseudonyms are tolerated but real names are preferred. However, if the moderator suspects that someone is posting under multiple aliases or pretending to be someone else (i.e. "sock puppeting"), they will be permanently banned from the blog.

• Mutual respect and sensitivity towards those with opposing views is essential. In particular, comments containing what the moderator feels is "creation-bashing" by evolutionists or "evolution-bashing" by creationists, will not be approved for posting.

FURTHERMORE: Both statements of fact and statements of opinion are welcomed, with the following provisos:

• Statements of opinion should be clearly indicated as such (perhaps with “IMO” in parentheses).

• Both statements of fact and statements of opinion may be challenged by anyone, so long as the challenge takes place within a reasonable length of time (and please remember, time online passes much more swiftly than time in the real world; three days is a virtual eternity).

• If a statement of fact is challenged, the person challenged should make a good faith effort to either provide supporting evidence or make a logical argument as to why such supporting evidence is unnecessary.

• No statement may be challenged or attacked by ad hominem arguments or by changing the topic of the thread (i.e. "hijacking"). In particular, directly or indirectly referring to another commentator as either a "liar" or as "having lied" (including semantic equivalents, such as "dissembling" or "mendacity") may result in the perpetrator of such an accusation being banned from further participation.

• Rather than accusing a commentator of lying when they have made a particular assertion, you should post a rebuttal that documents (with references and pertinent links) that there is evidence that the assertion is false and/or misleading.

• Speculation about motives, either directly or indirectly, by anyone commenting on any topic is never allowed and will not be allowed to appear in the "Comments", as this clearly constitutes hijacking the discussion by changing the topic (unless the post itself began as a discussion of motives). If you want to talk about motives, ask the moderator to start a thread to that effect (and if people can't remain civil in the comments on that post, the moderator reserves the right to delete it).

• Decisions about moderation are entirely and exclusively the prerogative of the moderator. If you don't want to abide by these rules, please don't waste my time (and yours) by attempting to post or comment here.

• Please be aware that any infraction of these rules will result in limitation or rescinding of your commenting privileges. A brief untoward statement in a long and otherwise reasonable comment fully justifies its rejection and may result in the rejection of all your future comments, reasonable or not (so keep backup copies in case you have to try again with a more civil version).

• If a post or comment has been rejected by the moderator, please don't repeatedly try to repost it. It won't ever be allowed, it wastes bandwidth and the moderator's time, and it simply reinforces the moderator's decision to reject the post (and maybe ban you permanently).

• Appeals of moderation or complaints about the behavior of your fellow commentators are not appropriate on any thread and, even in cases where they are not ad hominem attacks, they still qualify as hijacking the discussion. Any such appeal or complaint should instead be emailed to the moderator, who will eventually rule one way or the other.

Wednesday, January 07, 2009

Natural Theology, Theodicy, and The Name of the Rose

"Before, we used to look to heaven, deigning only a frowning glance at the mire of matter; now we look at the earth, and we believe in the heavens because of earthly testimony."- Jorge of Burgos, The Name of the Rose, by Umberto Eco (William Weaver, translator)

I read Dr. Fuller's first post on the subject with some interest, as I have just finished re-reading (for the fifth time) Umberto Eco's novel, The Name of the Rose. When I was a kid, it was inconceivable to me that a person could re-read a book. That was like seeing a movie over again; it just never happened. But now I often re-read books, and any movie or television show can be viewed as many times as one can possibly stand it.

One of the reasons I re-read books is that I've found that I often discover new things in the book on re-reading. What I had never noticed before about The Name of the Rose is that one of its main themes is the relationship between empirical evidence (that is, evidence that we can observe, either directly or indirectly) and faith, as exemplified by the epigraph of this blogpost.

What Jorge of Burgos (a thinly veiled portrait of Jorge Luis Borges) is speaking about is the relationship between empirical evidence and faith. He laments that in past times one's belief was entirely justified by faith, but now (in the 14th century) one's belief was grounded in empirical observation; that is, evidence derived from the observation of "base matter". Jorge's theology, which could be called revealed theology, was based on scripture and religious experiences of various kinds (especially as portrayed in the Holy Bible and the biographies of the Christian saints).

The "new" way of thinking that Jorgé laments is natural theology, a branch of theology based on reason and ordinary experience, according to which the existence and intentions of God are investigated rationally, based on evidence from the observable physical world. Natural theology has a long history, reaching back to the Antiquitates rerum humanarum et divinarum of Marcus Terentius Varro (116-27 BC). However, for almost two millennia natural theology was a minority tradition in Christian theology.

Paley's argument in Natural Theology is that one can logically infer the existence and attributes of God by the empirical study of the natural world (hence the name "natural" theology). Paley's famous argument of the "watch on the heath" was based on the idea that complex entities (such as a pocketwatch) cannot come about by accident, the way simple "natural" objects such as boulders do. Rather, Paley observes that a pocketwatch clearly has a purpose (i.e. to indicate the time) and is composed of a set of complex, interacting parts (the gears, springs, hands, face, case, and crystal of the watch) which we know for a fact are designed. He then argues by means of analogy that living organisms are even more clearly purposeful entities that must have a designer.

ID is the science of design detection — how to recognize patterns arranged by an intelligent cause for a purpose [emphasis added]

Fuller takes this definition quite seriously, arguing that the "intelligence" that does the designing in ID exists "outside of matter" (i.e. outside of the natural, physical universe). He then points out that this "intelligence" is "...a deity who exists in at least a semi-transcendent state." But then he poses the crucial question: "[H]ow can you get any scientific mileage from that?"

I would extend Fuller's question by turning it around: How can one get any theological mileage out of the idea that the existence and attributes of the deity can be inferred from observations of the natural, physical universe? This is precisely the program of natural theology, and it is the reason that I believe that natural theology is both intellectually bankrupt and ultimately destructive of belief in God. And, I am apparently not alone in this second belief; several of the comments on Fuller's post express essentially the same misgivings.

The problem here is the problem of theodicy. Fuller asserts that theodicy was originally a much broader topic than it is today. According to him,

Theodicy exists today as a boutique topic in philosophy and theology, where it’s limited to asking how God could allow so much evil and suffering in the world.

However, according to Fuller, theodicy once encompassed

"...issues that are nowadays more naturally taken up by economics, engineering and systems science – and the areas of biology influenced by them: How does the deity optimise, given what it’s trying to achieve (i.e. ideas) and what it’s got to work with (i.e. matter)? This broader version moves into ID territory, a point that has not escaped the notice of theologians who nowadays talk about theodicy." [emphasis in original]

Setting aside Fuller's historical analysis of the meaning(s) of theodicy (which I believe is both incorrect and the reverse of the actual historical evolution of the idea), I believe that Fuller gives Christians who still believe in the primacy of revelation over reason good reason to be concerned about the theological implications of ID:

"[Some theists are] uneasy about concepts like ‘irreducible complexity’ for being a little too clear about how God operates in nature. The problem with such clarity, of course, is that the more we think we know the divine modus operandi, the more God’s allowance of suffering and evil looks deliberate, which seems to put divine action at odds with our moral scruples. One way out – which was the way taken by the original theodicists – is to say that to think like God is to see evil and suffering as serving a higher good, as the deity’s primary concern is with the large scale and the long term."

I have pointed out in an earlier blogpost that this line of reasoning necessarily leads to the conclusion that God (i.e. the "intelligent designer" of ID theory) is a utilitarian Whose means are justified by His ends. As I have pointed out, this conclusion is both morally abhorrent and contrary to Christian doctrine. Fuller agrees, pointing out that "...religious thinkers complained about theodicy from day one":

"...a devout person might complain that this whole way of thinking about God is blasphemous, since it presumes that we can get into the mind of God – and once we do, we find a deity who is not especially loveable, since God seems quite willing to sacrifice His creatures for some higher design principle."

"...it’s blasphemous to suppose that God operates in what humans recognise as a ‘rational’ fashion. So how, then, could theodicy have acquired such significance among self-avowed Christians in the first place...and...how could its mode of argumentation have such long-lasting secular effects...in any field [such as evolutionary theory] concerned with optimisation?"

He then goes on to make essentially the same argument as that put forth by almost all ID supporters, an argument by analogy:

We tend to presume that any evidence of design is, at best, indirect evidence for a designer. But this is not how the original theodicists thought about the matter. They thought we could have direct (albeit perhaps inconclusive) evidence of the designer, too. Why? Well, because the Bible says so. In particular, it says that we humans are created in the image and likeness of God. At the very least, this means that our own and God’s beings overlap in some sense. (For Christians, this is most vividly illustrated in the person of Jesus.)

And how, precisely, is this an argument by analogy? Here it is:

The interesting question, then, is to figure out how much of our own being is divine overlap and how much is simply the regrettable consequence of God’s having to work through material reality to embody the divine ideas ‘in’ – or, put more controversially, ‘as’ — us. Theodicy in its original full-blooded sense took this question as its starting point. [emphasis added]

By "overlap" Fuller clearly means "analogy"; that is, how analogous is the "design" of nature (presumably brought about by the "intelligent designer", i.e. God) to human (and therefore divine) "design"? This inquiry, therefore, is based on the assumption that finding such analogies is prima facie proof that "design" in nature is the result of "intelligence" (and therefore, by extension, "divine intelligence").

But, as any undergraduate in elementary logic has learned, arguments by analogy alone are not valid evidence for anything. This is because there is nothing intrinsic to analogies that can allow us to determine their validity. As I have pointed out in an earlier blogpost, all analogies are false to some degree: the only "true" analogy to a thing is the thing itself.

Fuller lists four reasons why theodicy became important at about the same time as natural theology. These are:

• that the widespread publication of the Holy Bible not only facilitated the rise of Protestantism, it also made possible "individual confirmation" of one's "overlap" (i.e. analogy) with the deity;

• that "...theodicists...read the Bible as the literal yet fallible word of God. There is scope within Christianity for this middle position because of known problems in crafting the Bible, whose human authorship is never denied...."

• that "...theodicists...claimed legitimacy from Descartes, whose ‘cogito ergo sum’ proposed an example of human-divine overlap, namely, humanity’s repetition of how the deity establishes its own existence. After all, creation is necessary only because God originally exists apart from matter, and so needs to make its presence felt in the world through matter...."; and

• that the Scientific Revolution shifted the focus of theology from revelation to empirical investigation, grounding belief in God and His intentions in observable reality via arguments by analogy.

Let's summarize all of this before going on. According to Fuller, theodicy entails that:

1) the Holy Bible illustrates the analogies between humans and God;

2) the Holy Bible is an imperfect document, written by imperfect humans (and, by extension, should not necessarily be taken literally);

3) the Cartesian cogito ergo sum provides a paradigm of the analogy between human and divine "intelligence" by pointing to the connections between "supernatural" ideas and "natural" phenomena; and

4) the Scientific Revolution shifted the grounds for belief in God and His intentions from revelation to empirical observation of the natural world, via arguments by analogy.

Here is where I find the connection to The Name of the Rose. Umberto Eco has pointed out that the title of his novel has several allusions, including Dante's mystic rose, "go lovely rose", the War of the Roses, "rose thou art sick", too many rings around Rosie, "a rose by any other name", "a rose is a rose is a rose", the Rosicrucians...there are probably as many meanings as there are readers, and more. Eco asserts that this multiplicity of meanings is captured in the novel's concluding Latin hexameter:

stat rosa pristina nomine, nomina nuda tenemus ("the rose of old remains in its name; we hold only naked names")

And I agree with his assessment; the name of the rose is not the rose. Or, as Korzybski put it, the map is not the territory. However, this conclusion can be taken in one of two ways. According to the first (which is based on Platonic idealism), the idea of the rose is what "matters". That is, the idea of the rose pre-exists the rose, and therefore brings the rose into existence. The idea of the rose, therefore, is what is real (hence "Platonic realism"). This is the approach taken by revelation theologists, natural theologists, and ID supporters: that the "design" of the rose (i.e. the "idea" in the "mind" of the "intelligent designer") comes first, and is made manifest in the actual, physical rose.

However, an alternative interpretation is that the rose comes first; our name for the entities which exhibit "roseness" is based on our perception of the analogies between those observed entities we come to call "roses". This is the approach taken by virtually all natural scientists, especially evolutionary biologists. As I have pointed out elsewhere, the "designer" in this case is nature itself; the environment (both external and internal) of the phylogenetic lineage of the entities we call "roses". The "design" produced by this "designer" is encoded within the genome of the rose, and expressed within its phenotype, which is made manifest by an interaction between the rose's genome and its environment.

This was Darwin's own view, expressed in the famous concluding passage of On the Origin of Species:

It is interesting to contemplate an entangled bank, clothed with many plants of many kinds, with birds singing on the bushes, with various insects flitting about, and with worms crawling through the damp earth, and to reflect that these elaborately constructed forms, so different from each other, and dependent on each other in so complex a manner, have all been produced by laws acting around us. [emphasis added]

Darwin saw the physical world as being entirely regulated by a set of natural laws, including laws which had the effect of producing the "origin of species" and evolutionary adaptations. In his published writings, he declined to attribute the authorship of such laws to a deity, and in his private correspondence he generally refused to speculate on it as well.

This is precisely the same position taken by almost all evolutionary biologists, and is echoed in the words of William of Baskerville, Umberto Eco's protagonist in The Name of the Rose, who at the conclusion of the book says:

"It's hard to accept the idea that there cannot be an order in the universe because it would offend the free will of God and His omnipotence."- William of Baskerville, The Name of the Rose, by Umberto Eco (William Weaver, translator)