
Disclaimer

Authors are solely responsible for the content of their articles on PandasThumb.org.
Linked material is the responsibility of the party who created it. Commenters are responsible for the content of comments. The opinions expressed in articles, linked materials, and comments are not necessarily those of PandasThumb.org. See our full disclaimer.

It seems that ID has chosen to rekindle the ‘how does evolution create information’ question. See for instance “Richard Dawkins on the Origin of Genetic Information” at EvolutionNews.org, where spokesperson Luskin presents this question. And yet the question has been answered many times, so why are ID activists ignoring these explanations, or pretending that it has not been answered succinctly and successfully?

One of the basic claims of ID is that processes of regularity and chance cannot create complex specified information. ID relies here on an equivocation of the term ‘information’ since ID’s definition of information is merely a measure of our inability to explain it. In other words, unlike the complexity and information that science can explain, ID relies on that which science cannot explain (yet?) and calls it complexity or information.

Confused? I bet… Many ID proponents have similarly fallen victim to the bait and switch approach here.

So whenever ID states that science cannot explain complex specified information, all one has to do is point out the tautological nature of the claim. When ID then switches to the more common definition of information and complexity, it is trivial to show how evolutionary processes can indeed generate, in principle, information and complexity.

The real question then becomes: Were these processes indeed involved in the evolution of life on earth? While science provides a rich framework to study these questions, ID is left at the sidelines, unable to contribute anything relevant since it refuses to constrain its designer or to provide pathways and processes.

And remember, whenever science proposes a pathway, all ID can do is reject a strawman version of it, namely a pathway based on pure chance. Of course, any nontrivial scientific pathway is inaccessible to the calculations needed by ID to make its case.

Back to the question of information and complexity. How does science explain it? Not surprisingly, via very simple processes of regularity and chance: namely selection and variation. As many have shown, these simple processes are sufficient to explain the information in the genome. So now the question is not “how does science explain information in the genome” but “how well do science’s explanations perform”? For that we have to take existing genetic data and determine actual pathways. This historical reconstruction is not simple, although there now exist a handful of examples where science has indeed reconstructed the pathways, consistent with evolutionary theory.
ID may of course argue that science still has not provided all the answers, but contrary to ID’s predictions of an ‘Edge’ to evolution, science keeps finding out why evolution succeeds.
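As a concrete, deliberately simple illustration of what selection plus variation can do, here is a sketch in the spirit of Dawkins’s well-known “weasel” demonstration. It illustrates only cumulative selection against a fixed comparison string; the target, mutation rate, and population size are arbitrary choices for the example, and nobody claims real evolution works toward a target:

```python
import random

random.seed(1)
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # "regularity": score matches against a fixed environment (the target)
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # "chance": each position may be replaced by a random character
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

best = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while fitness(best) < len(TARGET):
    # variation: 100 mutated offspring (plus an unmutated copy, so
    # gains are never lost); selection: keep the fittest
    offspring = [best] + [mutate(best) for _ in range(100)]
    best = max(offspring, key=fitness)
    generations += 1

print(generations)  # a few hundred generations at most, not ~27**28 blind trials
```

Cumulative selection finds the 28-character string in hundreds of generations, whereas pure-chance sampling — the strawman version of the pathway — would need on the order of 27^28 trials.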

A good example comes from the work on evolvability and RNA. Contrary to ID’s predictions, RNA shows scale free networks, which themselves can be explained by simple processes of gene duplication and preferential attachment. These scale free networks provide a rich environment for evolution to succeed since it both contributes to the robustness as well as the evolvability of RNA.
The reason is that most RNA structures are close to most other RNA structures in sequence space. In other words, most any RNA structure can, via mutations in its sequence, reach any other RNA structure where most of the mutations are in fact neutral. Such findings help understand why evolution appears to proceed in stasis followed by rapid changes. This is exactly what the evidence suggests and the work on RNA has explained this evidence.

So perhaps ID proponents can help us understand how ID explains the origin of information in the genome? But it is unlikely that we will hear any further details on this matter. ID has chosen to remain scientifically vacuous.

Dembski wrote:

As for your example, I’m not going to take the bait. You’re asking me to play a game: “Provide as much detail in terms of possible causal mechanisms for your ID position as I do for my Darwinian position.” ID is not a mechanistic theory, and it’s not ID’s task to match your pathetic level of detail in telling mechanistic stories. If ID is correct and an intelligence is responsible and indispensable for certain structures, then it makes no sense to try to ape your method of connecting the dots. True, there may be dots to be connected. But there may also be fundamental discontinuities, and with IC systems that is what ID is discovering.

Finally, I would like to remind the reader that even if ID were correct that evolutionary algorithms cannot do better than random search, random search is an almost trivially effective search.
See for instance this link

Tom English wrote:

The obvious interpretation of “no free lunch” is that no optimizer is faster, in general, than any other. This misses some very important aspects of the result, however. One might conclude that all of the optimizers are slow, because none is faster than enumeration. And one might also conclude that the unavoidable slowness derives from the perverse difficulty of the uniform distribution of test functions. Both of these conclusions would be wrong.
If the distribution of functions is uniform, the optimizer’s best-so-far value is the maximum of n realizations of a uniform random variable. The probability that all n values are in the lower q fraction of the codomain is p = q^n. Exploring n = log p / log q points makes the probability p that all values are in the lower q fraction. Table 1 shows n for several values of q and p.
It is astonishing that in 99.99% of trials a value better than 99.999% of those in the codomain is obtained with fewer than one million evaluations. This is an average over all functions, of course. It bears mention that one of them has only the worst codomain value in its range, and another has only the best codomain value in its range.
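The arithmetic behind those figures can be checked directly from p = q^n, i.e. n = log p / log q. A quick sketch, using the specific q and p values the quote mentions:

```python
import math

def evaluations_needed(q, p):
    """Smallest n (up to rounding) with q**n = p: the number of random
    evaluations after which, with probability 1 - p, at least one
    sampled value escapes the lower q fraction of the codomain."""
    return math.log(p) / math.log(q)

# "In 99.99% of trials a value better than 99.999% of the codomain":
# lower fraction q = 0.99999, residual failure probability p = 0.0001.
n = evaluations_needed(0.99999, 0.0001)
print(round(n))  # ~921,000: indeed fewer than one million evaluations
```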



Dembski is of course correct, in that we have no clue how Dembski’s god magicks reality into being, and no clue how to GET a clue. You either accept or reject that he’s on the right track; you can’t check it out. Those who reject Dembski’s view of how his god does it can’t, in principle, be answered through scientific means, nor is there any serious effort to do so. They must be answered through political and administrative means, as has traditionally been the practice in theocratic systems.

The DI’s claim that magic is science probably isn’t taken at face value even by Johnson or Luskin. Explicitly, the goal is to position magic as science with the minimum level of verisimilitude to provide creationist school boards, judges, and ideally legislators and Presidents with a nominally plausible rationale for using civil authority to impose and enforce religious doctrine.

And thus the wedge: If only we can get government’s stamp of scientific approval on creationism, we can use the pervasive authority of the State to indoctrinate children young enough so that verisimilitude is no longer necessary. The critical mass of creationism-supporting voters is already there across much of the US; we have a born-again President who’s been stuffing the Supreme Court with religious lackwits. The wedge is working.

The effort to protect the word “information” from the DI’s carefully orchestrated semantic void is certainly worthy. As Orwell taught us so well, we can’t think straight when the words we think with have been hijacked. Yet I don’t think the average Kansas-style voter really much cares what “information” means in any rigorous or technical sense. It’s just WORDS they can use to justify convictions that the educational system, in order to minimize hassles and complaints, sidestepped around correcting.

I’m not qualified to evaluate the technical meanings of “information” or “scale free networks” or “the lower q fraction of the codomain” (huh?), nor am I inclined to make what I recognize would be the significant effort of reaching that point. I either accept that evolution is creative as I understand creativity, or that it is not. If I start with the unquestionable conviction that goddidit, then evolution didn’t do it. Now, do I want my child to have a solid technical understanding of science, or do I want my child to get into heaven? The DI’s overriding goal is to convince me that these choices are mutually exclusive, at least until we can get Jesus back into science classes where he belongs (and get anti-Jesus science OUT).

These people wouldn’t know the technical definition of information from a hieroglyph. If they knew the first thing about Shannon’s Information Theory they’d know that it is precisely random processes that are the sources of information - the more random the process, the more information it generates.
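The point about randomness follows from Shannon’s entropy formula, H = −Σ pᵢ log₂ pᵢ: the less predictable a source, the more bits per symbol it generates, with the uniform distribution as the maximum. A quick check (the distributions below are arbitrary examples):

```python
import math

def entropy(probs):
    """Shannon entropy in bits per symbol."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

uniform = [0.25] * 4            # maximally random 4-symbol source
biased  = [0.7, 0.1, 0.1, 0.1]  # more predictable source
certain = [1.0, 0.0, 0.0, 0.0]  # no randomness at all

print(entropy(uniform))  # 2.0 bits: the maximum for four symbols
print(entropy(biased))   # ~1.36 bits
print(entropy(certain))  # 0.0 bits: a deterministic source generates no information
```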

So, in short, the information comes from mutations and the close match of genome to environment, IDers so called “specification,” comes from natural selection.

There, I explained it in one sentence with two paragraphs of background - the second paragraph being the unwritten one about natural selection.

The adaptive immune system is a straightforward example of biology creating information. Vertebrates can make antibodies to a vast array of substances, including novel chemical compounds. Not only does the immune system create immense variation, but it does so on a time scale of weeks. This is a time scale that even creationists can comprehend.

My vita is available, as it has been, at my web site. I am not going to counter with a biosketch — I hate the things. But I will hint at why my work is more relevant to ID than is that of a creationist interested in biodiversity.

In 1996, six years prior to the release of Bill Dembski’s No Free Lunch, I argued that “no free lunch” in search is a consequence of conservation of (Shannon) information. This should have a familiar ring for many of you here. I also established that, contrary to intuition, optimization is easy under the assumptions of the “no free lunch” theorems. It took some time for IDists to catch on to that — there are still some who have not. (By the way, this was the first theoretical paper I ever wrote, and it is far from my best work on “no free lunch.”)

I gave well-publicized tutorials at major conferences when Bill was completing NFL.

In 2000, I showed that almost all fitness functions are algorithmically random. This is definitely something that IDists need to comprehend, but that most have not. Bill seems to deal with it now by stipulating fitness functions “of interest” in his work. Presumably a function is of interest only if it is compressible.
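Compressibility is the practical face of algorithmic randomness: an algorithmically random fitness table has no description much shorter than itself. As a rough illustration only — an off-the-shelf compressor gives merely an upper bound on Kolmogorov complexity, and the data below are invented — a patterned table compresses well while a random one essentially does not:

```python
import random
import zlib

random.seed(0)
n = 10_000

# A highly compressible "fitness table": smooth, periodic values
structured = bytes(i % 256 for i in range(n))

# An algorithmically random one: independent uniform bytes
incompressible = bytes(random.randrange(256) for _ in range(n))

print(len(zlib.compress(structured)))      # far smaller than n
print(len(zlib.compress(incompressible)))  # roughly n: no pattern to exploit
```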

I hope IDists will understand that what drew me into the debate over ID was an overlap in Bill Dembski’s interests and my own. I am more comfortable with the topics of search, information, and complexity than are most IDists. In fact, I have spent considerable time studying Bill Dembski’s arguments, and I would be highly surprised if Thomas D. English can relate them more accurately than I can.

The problem, as I see it, is that the term “information” is applied as an analogy (like “genetic blueprint,” “genetic code,” etc.), yet DNA is NOT information. Yes, we can glean “information,” as humans define it, from the “code,” but it is not information in and of itself. It is a set of chemical reactions that have been honed by natural selection into recurrent processes, just as the principles of physics shape the crystallization of water molecules into snowflakes. What sets life apart from other physical phenomena is self-reproduction, and we are getting close to understanding possible mechanisms by which that may have happened. Once it is started, there is no need for an informant of the “information” process, because it is not technically “information.”

Steve S suggested that I share what I wrote to him and PvM. I originally planned to silently take the heat for affiliating myself with the EvoInfo lab. I believe that scholars should associate freely, and without explanation. The reason I’m deviating from my plan is that I am appalled to see the mileage the ID movement is getting out of the affair at Baylor. The lab is not doing ID, and I am not an ID convert.

Shall I recite my catechism? A 1987 Supreme Court decision torpedoed instruction in creation science in public-school science classes. An attorney by the name of Johnson salvaged everything in creation science but the “making” and christened the tub intelligent design. Thus from the outset there has been no intellectual legitimacy in the concept of ID. It is simply a lawyer’s strategy for passing legal muster while giving up as little as possible. Of course, there has followed a substantial effort to make it appear that the dogma of a sociopolitical movement actually has intellectual roots. The majority of “scholarly” writing on ID is propaganda.

Tom wrote:

I intended for all in the know to know I was posting under an alias at PT. I once told Bob Marks I didn’t care to make public statements on the EvoInfo (virtual) lab, and I’m using the alias to stick to what I said, if only legalistically. Please do not make my identity widely known.

The reason I’m posting is that I’m getting pissed, watching the [ID movement] use the affair at Baylor to get ID cast in a sympathetic light in the media. I joined the lab to protest Baylor’s infringement of Bob Marks’ academic freedom. In my opinion, it’s impossible to discriminate against ID, because it’s a sociopolitical instrument that has no intellectual legitimacy.

If you look at the discussion page of the Wikipedia article on “no free lunch theorems,” which I maintain, you’ll see that I decided in June, prior to the controversy, that there was no ID at the EvoInfo lab’s site. My concern at the time was only to avoid bias in the article. It happens that Marks’ definition of evolutionary informatics covers most of my research of the past twelve years, no matter its motivation. It also happens that I was expelled from a Baptist institution 30 years ago for opposing discrimination against women, and I have strongly supported freedom of expression ever since – especially for those I don’t agree with. It seemed to me I was, like it or not, the perfect person to step forward and support Marks. I resisted for a while, worrying about what sort of games I might be sucked into. Now I’m in the difficult position of not only backing a guy everyone I know in the IEEE Computational Intelligence Society regards well, but aiding and abetting the ID movement. I believe that individuals are higher than causes, and academic freedom is a higher cause than opposition to ID, so I’m really just whining about being in a tight spot.

I don’t know the pub date for the book. […] I think I parsed Dembski’s latest definition of specified complexity more closely than anyone else has. I rewrote his expressions with more explicit notation to expose the details of what he was saying. I also identified some severe computability problems it seems no one else has. I’m not saying I did more than Wes Elsberry and Jeffrey Shallit did with an earlier version. But I think I’ve been lucky enough to address the final morph of specified complexity. It appears Dembski has given up on the concept.

I have contacted my editor to see if I can share my chapter with you. He’s Down Under, so it may take a while to get in synch.

Thanks Tom. I have been following your work over the last few years, and I was surprised to hear that you had joined Marks’s lab, though once you explained your reasoning I accepted it. Even so, as you may have noticed by now, ID seems to be getting quite some mileage out of claims that it is being censored, even though, as you point out, the papers and the work by the lab have little relevance to ID.

The three papers on the site may be seen as arguing against evolutionary processes as a source of information, even though on closer scrutiny they deliver far less than one might expect from Dembski’s bragging.

WD: It’s too early to tell what the impact of my ideas is on science. To be sure, there has been much talk about my work and many scientists are intrigued (though more are upset and want to destroy it), but so far only a few scientists see how to take these ideas and run with them. There’s a reason for this slow start. My work in The Design Inference was essentially a work on the philosophical foundations of probability theory, trying to understand how to interpret probabilities in certain contexts. This led naturally to some ideas about information and the type of information used in drawing design inferences. My book No Free Lunch was a semi-popular overview of where I saw the ID movement headed on the topic of information. The hard work of developing these ideas into a rigorous information-theoretic formalism for doing science really began only in 2005 with some unpublished papers on the mathematical foundations of intelligent design that appeared on my website (www.designinference.com). With the formation of Baylor’s Evolutionary Informatics Lab just this month and work by me and my colleague Robert Marks on the conservation of information (several papers of which are available at www.evolutionaryinformatics.org), I think ID is finally in a position to challenge certain fundamental assumptions in the natural sciences about the nature and origin of information. This, I believe, will have a large impact on science.

Before engaging a creationist in a discussion on information, you need to very clearly ask: “by information, do you mean an intended message by an intelligent being?” Then beat with a stick until they indicate they are prepared to argue about something sensible.

Evolutionary processes search for biosystems just as tornadic processes search for trailer parks. In other words, I think it’s a huge mistake to regard evolution as a search process. Search implies a target, and what Marks and Dembski may well show is that it is infeasible for “evolutionary search” to hit certain “biological targets.” That would be fine with me. They would count it as evidence for a supernatural source of information, but others would come to the fore with search-free models of evolution.

As I see it, complex specified information is dead, and active information is its replacement. But I hasten to add that while CSI had serious shortcomings, active information is reasonable and interesting.

I don’t see evolutionary processes as creating information, but as latching it in. I’m going with haploid organisms here. In the stage of evolution known as reproduction-with-variation, the variation is not reasonably attributed to evolutionary processes. The organism “tries” to make a perfect copy of itself, and random errors are caused by, for instance, thermodynamic effects. Errors can either increase or decrease the Kolmogorov complexity (algorithmic information) of the genome. Any change in the information of the genome indicates that there was information in the random errors. I think it’s very important to place the source of information (randomness) outside the evolutionary processes, partially to avoid “creation,” and partially to reflect the fact that evolutionary processes get information “for free.” Incidentally, Paul Davies, commenting on the informational physics of evolution, suggested that evolution is analogous to a Brownian ratchet. I like that idea, though I’m not sure how close the analogy is.
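English’s “latching” picture can be sketched in a few lines. This is a toy haploid model of my own invention, not anything from his papers: the copying errors supply the randomness from outside, and selection merely retains whichever errors happen to help.

```python
import random

random.seed(2)

def replicate(genome, error_rate=0.01):
    # the organism "tries" to copy itself perfectly; the bit-flip errors
    # are randomness injected from outside the evolutionary process
    return [1 - g if random.random() < error_rate else g for g in genome]

def fitness(genome):
    return sum(genome)  # toy fitness: count the 1-bits

genome = [0] * 100
for _ in range(2000):
    offspring = replicate(genome)
    if fitness(offspring) >= fitness(genome):  # selection latches gains in
        genome = offspring

print(fitness(genome))  # most of the 100 sites have latched in a beneficial flip
```

Note that selection here adds no randomness of its own; it is a ratchet that keeps whatever information the copying errors happen to deliver.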

Well, isn’t this thread off to a kick-ass start. PvM’s post is thorough, and the comments are excellent.

PvM wrote:

ID relies here on an equivocation of the term ‘information’ since ID’s definition of information is merely a measure of our inability to explain it.

Acute.

PvM wrote:

So now the question is not “how does science explain information in the genome” but “how well do science’s explanations perform”? For that we have to take existing genetic data and determine actual pathways.

This addresses so many problems with ID. Information in the genome isn’t interesting for biology in the light of evolutionary mechanisms. (Which unfortunately for ID doesn’t require teleology.)

And when we use a specific information measure to characterize some structure, the question remains if it is descriptive and above all predictive. Is it useful, and how useful is it?

This is also a fundamental difficulty with ID’s idea of criticizing models such as EAs or biologically inspired EAs for failures because of “experimenters’ input of information”. These models are describing and testing predictions. Any artificial source of information (if needed) is a natural part of the experiment, and doesn’t mean they fail as specific tests. If that were the case, all experiments testing scientific predictions would fail.

“before engaging a creationist in a discussion on information, you need to very clearly ask “by information, do you mean an intended message by an intelligent being”? then beat with a stick until they indicate they are prepared to argue about something sensible.”

I agree. I once gave a lecture and said something about the information in the genome. Someone raised their hand and asked, “does that imply intelligence?”, meaning of course does that imply an intelligent cause for the information. I replied that there is information in the periodicity of a pulsar but that does not mean that the pulsar is intelligent or that an intelligence was required in order to create it. However, intelligence is required to interpret the information. I don’t know if that is the best answer I could have given, but as you point out, that was certainly the assumption implicit in the question.

Tom,

Thanks for sharing your thoughts with us. I commend you on your courage in defending academic freedom. If you are still reading posts here, perhaps you could give us the benefit of your expertise. What do you think is the best definition of information and what do you think is the best way to measure it?

By the way, I completely agree that evolution is not a “search” in the ordinary sense and that it definitely does not involve a “target” in the ordinary sense. Creationists and ID proponents always seem to inject anthropomorphisms into their arguments; I wonder why that is?

I would like to thank you for taking swift and decisive action; that should quell speculation and rumors. It was also interesting to see your reasoning and concerns.

That doesn’t mean that there doesn’t remain concerns to discuss. (When did it ever? :-P)

Tom English wrote:

I decided in June, prior to the controversy, that there was no ID at the EvoInfo lab’s site.

I note that Marks and Dembski’s papers criticize evolutionary biological models especially, and that they do so based on ID ideas that the modeled structure “smuggles in” information. As this isn’t a neutral view on algorithms but a specifically chosen example, my own decision is different.

That difference in opinion shouldn’t be a problem. After all, we should not want to block Marks or Dembski from presenting their ideas.

Tom English wrote:

I think it’s very important to place the source of information (randomness) outside the evolutionary processes,

Certainly information (measured as randomness) is created by the environment, but it is also created by biological mechanisms such as sexual recombination or biological processes such as selection or drift (as in both cases traits are fixed in a probabilistic process).

Since information is a property of a system it seems difficult to me to tease out how much is created by engaging with the environment and how much is created by the evolutionary process itself.

Incidentally, discussing analogies, some population models are analogous to Bayesian inference models used in machine learning. The population’s genome (the distribution of alleles over the population) is analogous to a learning machine that learns about the environment by trial and error.

[Btw, this knowledge becomes obsolete when the environment changes, so the population has to learn anew. The natural specification of the object to measure, how the resulting traits work out in fitness terms (as well as the object itself), changes with the environment. So this isn’t exactly analogous to the minimum-description-length algorithmic information measure, at least as far as this layman understands.]

Tom English wrote:

Paul Davies, commenting on the informational physics of evolution, suggested that evolution is analogous to a Brownian ratchet.

I’m not sure what you, or Davies, mean here. AFAIK a Brownian ratchet is an unphysical perpetual-motion suggestion. How would such an analogy bear on the real world?

Neither Marks, nor Dembski, nor yourself, nor I know how to define the real-world probabilities of hitting a specific “target” with respect to a specific trait.

even knowing ALL the competing selective pressures, even in simple systems like John Endler’s Poecilliids (guppies) in Trinidad, it is still not easy to predict exact probabilities with any confidence. general directions to expect, yes, but exact probabilities? no.

hence the entire argument from “improbability” is completely flawed to begin with.

it really is that simple.

mental masturbation and playing with models is one thing, but what we see in the real world has to fit the models, too.

Tom English, I’m glad to hear from you and have some things cleared up. I am surprised to read, though, that imperfect replication of DNA is not part of evolution, or an ‘evolutionary process’ as you put it. I think what you are saying, sans surprising use of terms, is that variation is the source of information rather than selection plus drift. Some would say it is the combination of these, but it is all part of evolution.

Torbjorn the unprintable, you must be thinking of some other Brownian ratchet. The biological ones are not ‘unphysical’.
Search for “molecular motor Brownian”.

I’m currently reading The Touchstone of Life by Werner R. Loewenstein. From this, I get the impression that evolution doesn’t so much “create” information as reflect self-organization within flows of information. One of his starting points is that information is originally carried in photons and they of course are abundant. If his is the better concept, then the question isn’t so much how information is created but rather how it is captured and how and why the molecular forms which carry it develop and interact the way they do (a question which evolutionary biology addresses and ID doesn’t). I guess I wonder if anybody is familiar with Loewenstein enough to say a) if I’m understanding him correctly and b) if his treatment of information and biological evolution is mainstream and productive of useful lines of research.

Robert Marks’ good past research: some people are perfectly reasonable as long as the subject is not creationism….

Dembski and Marks at Baylor: it may just be that Baylor and Dembski have a history and Baylor does not want him back, period. He and Marks can do all the research they wish with Dembski affiliated with his Seminary, provided they can figure out anything to do that suits the creationist agenda. All that has happened is that Dembski’s attempt to claim to be from Baylor again, and to have a grant from a regular foundation, did not work. He is a seminarian, and the grant was actually part of his support from Disco. It’s not like Dembski now has no affiliation; he just doesn’t have one to match his ego. Sometimes in life, one just can’t have one’s heart’s desire.

Perhaps what I am going to say here has some relevance to both the question of the connection between ID and Marks’s virtual lab, and to Tom English’s remark about Dembski’s model of evolution as a search algorithm.

In the past I posted at least two entries right here on PT and also on Talk Reason where I argued against Dembski’s model - a search for a small target in a large search space. (See for example here). As usual, Dembski ignored my comments, which is OK - my intention was far from a desire to convince Dembski.

Now about Tom English’s comment (I never knew there are two Tom Englishes: the one I have some knowledge about is imho a highly qualified expert on NFL). Now Tom asserts that Marks is a great scientist and has nothing to do with ID. While Marks indeed may be an excellent scientist, I take the liberty of doubting the assertion that he is not in cahoots with Dembski. Here are some facts (besides having Dembski affiliated with Marks’s lab). Some time ago a Swedish mathematician Olle Haggstrom published an article critical of Dembski’s concepts (see here). After a while, a reply to Olle was posted, authored by Marks and Dembski (see here). This reply maintains that evolution necessarily has certain intrinsic targets and therefore Dembski’s model is valid. This article is imho unsubstantiated; it brazenly asserts that all Olle’s arguments not only do not disprove Dembski’s thesis but in fact support it. Such a challenging view of Marks/Dembski is not supported by any substantial arguments but just declared as self-evident. Maybe Marks is great in his field, but his (with Dembski) anti-Haggstrom paper seems to show, first, that he shares Dembski’s pro-ID views, and, second, that perhaps he is not really great beyond his field. Just IMHO of course.

All of this has no bearing on Tom’s decision to join Marks’s lab and I wish him success in that endeavor.

I’m glad that discussions on the actual content of Marks and Dembski’s work are starting to pop up. Here are my own beliefs on the EIL work:

1) Casting the concepts in information terms serves only to create confusion. The issues are much clearer when expressed in terms of probabilities. According to Shannon surprisal, which is the information measure that D&M use, D&M’s exogenous information measures the amount of information in the success-or-failure outcome of the baseline search, not the amount of information in the search parameters. Likewise with endogenous information, and active information doesn’t measure the amount of information in anything.

2) The “active information” measure yields a number that has no useful significance. It doesn’t tell us anything of use that we didn’t already have to know in order to calculate it.

3) The “active information” measure is always relative to a somewhat arbitrary baseline. Apparently, the baseline should be a blind search, but over what search space? And using what search structure? For instance, M&D’s response to Schneider discusses two related blind searches, one far more efficient than the other (so they say) because of its search structure. Which of these should we choose as a baseline for Schneider’s ev search?

4) Applying the EIL concepts to evolutionary processes in the real world requires that a target be defined. How do we do this? D&M’s notion of “intrinsic targets” has very obvious logical problems.
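For readers following along: in the Marks/Dembski papers under discussion, endogenous information is I_Ω = −log₂ p (p the success probability of the baseline blind search), exogenous information is I_S = −log₂ q (q that of the assisted search), and active information is the difference, I₊ = log₂(q/p). A minimal computation (the probabilities below are invented) makes point 2 concrete: the number is just a re-expression of p and q.

```python
import math

def active_information(p_baseline, q_assisted):
    """Active information I+ = I_endogenous - I_exogenous = log2(q/p), in bits."""
    endogenous = -math.log2(p_baseline)  # -log2 p for the blind search
    exogenous = -math.log2(q_assisted)   # -log2 q for the assisted search
    return endogenous - exogenous

# Hypothetical numbers: blind search succeeds with p = 2**-20,
# an assisted search with q = 2**-4.
print(active_information(2**-20, 2**-4))  # 16.0 bits -- nothing beyond p and q
```

The output tells us nothing we did not already have to know to compute it, which is exactly the complaint in point 2; and its value shifts with every choice of baseline, which is the complaint in point 3.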

As an interesting sidenote, a very obvious discrepancy can be found by comparing Schneider’s Evolution of biological information to M&D’s response. It turns out that the problem is on M&D’s side, and it stems from a bug in one of their MATLAB scripts. The problem is so huge that M&D are going to have to rewrite that paper. 50 panda points for whoever can find the bug.

Perhaps what I am going to say here has some relevance to both the question of the connection between ID and Marks’s virtual lab, and to Tom English’s remark about Dembski’s model of evolution as a search algorithm.

In the past I posted at least two entries right here on PT and also on Talk Reason where I argued against Dembski’s model - a search for a small target in a large search space. (See for example here). As usual, Dembski ignored my comments, which is OK - my intention was far from a desire to convince Dembski.

Now about Tom English’s comment (I never knew there were two Tom Englishes: the one I have some knowledge of is imho a highly qualified expert on NFL). Tom asserts that Marks is a great scientist and has nothing to do with ID. While Marks indeed may be an excellent scientist, I take the liberty of doubting the assertion that he is not in cahoots with Dembski. Here are some facts (besides Dembski’s being affiliated with Marks’s lab). Some time ago a Swedish mathematician, Olle Haggstrom, published an article critical of Dembski’s concepts (see here). After a while, a reply to Olle was posted, authored by Marks and Dembski (see here). This reply maintains that evolution necessarily has certain intrinsic targets and that therefore Dembski’s model is valid. The article is imho unsubstantiated; it brazenly asserts that all of Olle’s arguments not only fail to disprove Dembski’s thesis but in fact support it. This challenging claim is not supported by any substantial argument; it is simply declared self-evident. Maybe Marks is great in his field, but his anti-Haggstrom paper with Dembski seems to show, first, that he shares Dembski’s pro-ID views, and, second, that perhaps he is not really great beyond his field. Just IMHO, of course.

Tom also asserts that, unlike Dembski’s CSI, “active information” (discussed in particular in Marks and Dembski’s anti-Haggstrom article) is a useful and reasonable concept. I agree that the concept as such may be construed as reasonable. However, the question is not whether AI as a concept has content, but whether evolutionary algorithms can only succeed if the AI is either front-loaded or supplied from outside sources. This question is related both to Dembski’s “displacement problem” and to the essence of the NFL theorems. Neither Dembski nor Marks offers any evidence that AI necessarily must be added to what they call “endogenous information.” They simply claim that this is so (which is just another representation of the “displacement problem”). In fact, the NFL theorems are only valid for “black box” algorithms, which by definition have no access to any information besides that accumulated in the course of exploring the fitness landscape and gleaned exclusively from that landscape. Such algorithms neither possess front-loaded AI nor receive it from outside during the search. This, however, does not prevent certain specific algorithms from immensely outperforming blind search, which is a routine occurrence. Therefore all the talk about AI is as irrelevant to biological evolution as the talk about CSI or the NFL theorems.

All of this has no bearing on Tom’s decision to join Marks’s lab and I wish him success in that endeavor.

Neither Marks, nor Dembski, nor yourself, nor I know how to define the real-world probabilities of hitting a specific “target” with respect to a specific trait.

even knowing ALL the competing selective pressures, even in simple systems like John Endler’s poeciliids (guppies) in Trinidad, it is still not easy to predict exact probabilities with any confidence. general directions to expect, yes, but exact probabilities? no.

hence the entire argument from “improbability” is completely flawed to begin with.

it really is that simple.

mental masturbation and playing with models is one thing, but what we see in the real world has to fit the models, too.

I have made similar remarks in arguing that specified complexity is generally not computable, and mostly agree with you. But something you might consider is that exact probabilities are not necessarily required. In search, some quantities grow very rapidly and others shrink very rapidly. It is conceivable that someone could establish an inequality on very imprecise quantities that would make a persuasive argument that “evolutionary search” could not have “hit targets” without an extrinsic source of information.

To expand a bit on what I said above, it seems to me that teleology is inherent in the search metaphor. Most of us believe, at least when we stop and think, that it’s just a metaphor, or perhaps that it’s a model that must be taken with a grain of salt, and that evolutionary processes do not have the end of hitting targets. In my opinion, Marks and Dembski take the metaphor literally, and it is important to keep that in mind.

And when we use a specific information measure to characterize some structure, the question remains whether it is descriptive and, above all, predictive. Is it useful, and how useful is it?

You might have a look at Paul Vitanyi’s recent work on the Kolmogorov structure function. It is not only up-to-date, but gives the most accessible treatment of Kolmogorov complexity and the structure function I have ever seen.

Quite a number of researchers have looked at the Kolmogorov complexity of genomes. Perhaps this measure of information is not relevant to all investigation, but it certainly is to some.

What do you think is the best definition of information and what do you think is the best way to measure it?

That’s an interesting question. The only answer a computer scientist like me can give is that you have to choose a measure of information that is appropriate to the application. Even in the area of Kolmogorov complexity, there are two fundamental measures, plain complexity and prefix complexity. The standard text in the field emphasizes that neither is better than the other.
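Kolmogorov complexity itself is uncomputable, but any real compressor gives an upper bound on it, which is the usual practical stand-in. A toy sketch (zlib here is just a convenient, admittedly weak, compressor, and the data are made up):

```python
import random
import zlib

def complexity_upper_bound_bits(data: bytes) -> int:
    """Crude upper bound on Kolmogorov complexity: bits in a compressed encoding."""
    return 8 * len(zlib.compress(data, 9))

repetitive = b"AB" * 500  # highly structured, 1000 bytes
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(1000))  # incompressible-looking

# The structured string admits a far shorter description than the noisy one,
# even though both are 1000 bytes of "raw" data.
```

The choice of compressor, like the choice between plain and prefix complexity, is itself application-dependent, which rather underlines the point that there is no single privileged measure of information.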

Now consider the notion, popular with IDists, that information is a physical primitive, like mass and energy. Many people – especially quantum theorists and associated philosophers – are busy debating just what this could mean. How would we define the physical primitive when we have multiple notions of information that are reasonable and useful? There are various other problems in treating information as a primitive that I’m vaguely aware of, but not qualified to discuss.

Creationists and ID proponents always seem to inject anthropomorphisms into their arguments. I wonder why that is?

Hmm. Could it be that if I’m created in the image of God, then God must be like me? I watched the classic silent film “The Passion of Joan of Arc” a couple nights ago. At one point Joan says, “His ways are not our ways.” I wish that saintly view were more prevalent.

But I hasten to add that while CSI had serious shortcomings, active information is reasonable and interesting.

You share that opinion with Mark Chu-Carroll, and I put a lot of stock in both of you, but for the life of me I can’t see it. As an information measure it seems far more obfuscatory than enlightening. Maybe I’m missing something.

Obviously some searches have a higher probability of success than others. I suppose it makes sense to take the quotient of the respective probabilities for two searches and use it as a measure of how much more effective one search is than the other. But why take the negative log of that quotient and call it an information measure? Where’s the information that we’re measuring?

“The event A associated with the active information is defined such that P(A) = P(B)/P(E) where B is the success of a baseline search and E is the success of an efficient search. Let’s pretend that the parameters of the efficient search were chosen from all possible parameters; we’ll call this selection event X. NFL tells us that P(X)*P(E|X) <= P(B), so P(X) <= P(B)/P(E|X), so P(A) >= P(X). After the negative log transformation, I(A) <= I(X). That is, the active information measure gives us a lower bound on the information associated with the selection of the particular search.”

Am I on the right track, Tom? If so, then how is the active information measure useful?

Mark Perakh reminded me of another beef that I have with the EIL project. Both Marks and Dembski have connected the work to design arguments, but they’ve never elaborated on that connection, leaving it to their fellow IDists to “fill in the details”, in typical Dembskian fashion. For instance, in their response to Haggstrom, they say:

Marks and Dembski wrote:

Thus, when Haggstrom writes, “In almost all concrete optimization problems, we have some prior information,” he does not in fact “refute [design arguments] … irreparably.” He merely restates them.

IDists should, at this point, be scratching their heads and wondering how the prior existence of information constitutes a restatement of design arguments. What’s the necessary connection between information and design? Marks and Dembski will never tell us explicitly, because there is none.

As usual, a nice description of the “Brownian Ratchet and Pawl” is given by Richard Feynman in Volume I, Chapter 46 of the Feynman Lectures on Physics.

The key point, and a source of much confusion, is the uniformity or non-uniformity of the temperature. If there is damping in the pawl, there is implicitly a flow of energy, which in turn implies a temperature difference that allows the ratchet and pawl device to run “uphill”.

Another key point that is often missed is that temperature is essentially the kinetic energy per degree of freedom in a system. Transfers of energy can take place differently in different directions if momentum transfers are different in different directions. That is why adding salt to ice lowers the temperature. The salt breaks molecular bonds in the frozen water, which opens up more degrees of freedom for the same amount of energy, hence the kinetic energy per degree of freedom (temperature) drops. If this kind of situation occurs in a system in which momentum transfers in some directions are changed and in other directions are not, a system can crawl “uphill” in a given direction. Molecules and atoms on catalytic surfaces can be such an example.

In fact, there are many situations in physics where these kinds of events happen. Calling it “added information” may be technically correct given a proper definition of “information”, but calling it intelligently added information is extremely misleading.

And yes, that is exactly the point I was alluding to. Starting from the assumptions that “God made me in his image” and “God made the universe especially for me” would seem to predispose one to extensive anthropomorphism and other logical fallacies in trying to understand nature. That might not be the best way to understand reality.

It is conceivable that someone could establish an inequality on very imprecise quantities that would make a persuasive argument that “evolutionary search” could not have “hit targets” without an extrinsic source of information.

my use of the word “exact” is in fact far too extreme; even general areas of probability calculations are fraught with grand assumptions.

calculating probabilities is not possible at all if you are looking at the evolution of traits in the past (fossil record).

there is insufficient information available to even generate GENERAL assumptions regarding probabilities in that case.

all we have available is looking directly at how specific selection pressures (or lack thereof) influence the directions of traits we can measure.

there is no way to generate any realistic measure of probability for say, the evolution of the first invertebrate eye.

it is not possible without huge assumptions, not based on ANY real world data whatsoever.

Looking at measures of probability even within currently well-studied systems is damn near impossible, though it is of current interest to several evolutionary biologists who have been musing about how to calculate it mathematically. I’m blanking on names at the moment, but when they pop into my head, I’ll add them in.

suffice it to say, even people who have been studying the issue for decades don’t pretend to be able to formulate even general probability estimates for the specific directions a trait will take in the field. Unless the population under consideration is entirely controlled for all relevant selection pressures (lab conditions, and even then it’s difficult), generating remotely realistic probability estimates for the frequency of specific alleles within that population after X generations would be difficult, at best. There are surely exceptions, as the behaviors of alleles within a given population are not all equivalent, and large-scale mutations like chromosomal duplication would likely have quite predictable results over several generations. Still, I think my point is at least a bit clearer now.

what Dembski et al. have been trying to do isn’t even based on extant studies of allele frequencies; it’s based entirely on assumptions they basically pulled out of their collective asses.

sorry, but that’s not science, and until one of these folks starts to actually base their conceptualizations on real world data, there’s little reason to take it seriously, even if the math “works out”.

To expand a bit on what I said above, it seems to me that teleology is inherent in the search metaphor. Most of us believe, at least when we stop and think, that it’s just a metaphor, or perhaps that it’s a model that must be taken with a grain of salt, and that evolutionary processes do not have the end of hitting targets.

Teleological statements about how processes proceed are certainly not uncommon, nor are they necessarily bad, provided one understands that there are mechanisms behind the seeming purposeful progression of a process. We use these metaphors all the time, even when describing physical processes that have no hint of purpose.

For example, when we set up a problem, such as that of a flexible cable suspended between two points, and look for the shape that minimizes the potential energy of the cable, we are not presuming that the cable knows that it has to minimize its potential energy. In fact, there is a deeper presumption going on here, namely that the cable can fall into a shape that minimizes the potential energy because there are paths by which the energy in the cable can be dissipated. If this were not the case, the cable would flop around among many different configurations as the energy transfers among many modes of oscillation and then back again with the total energy remaining constant.
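As an aside, the minimizing shape alluded to above is the classical catenary; sketched in the standard calculus-of-variations form (the constants and coordinates here are the usual textbook ones, not anything specific to this thread):

```latex
% Potential energy of a cable of linear density \rho hanging under gravity g:
U[y] \;=\; \rho g \int_{x_1}^{x_2} y \,\sqrt{1 + (y')^2}\, dx ,
\qquad \text{subject to fixed length } L = \int_{x_1}^{x_2} \sqrt{1 + (y')^2}\, dx .
% The Euler-Lagrange equation for the length-constrained functional gives
y(x) \;=\; a \cosh\!\left(\frac{x - x_0}{a}\right) + y_0 ,
% the catenary; the cable settles into it only because dissipation removes energy.
```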

These same kinds of processes occur in any physical system. If, in the process of dissipating energy that had been transferred into a system, that system gets trapped in a state that is relatively improbable because it got pumped into that state as energy flowed in, it doesn’t mean that the system sought that state; it simply fell into that state. And this is a key point; systems that tend toward end states that look purposeful or improbable are in fact dissipating energy that was put into them.

Our use of teleological statements to describe these circumstances simply reflects our familiarity with the outcomes. We tend to cast many of our problems in these terms because, in a very real sense, it concisely describes what happens even when we are not clear about the underlying physical mechanisms leading to these end states.

The major inconsistency I see in the ID literature is that they make the claim that certain end states are extremely improbable because they have an underlying assumption that they are only one of an extremely large number of states that exist simultaneously. In other words, their systems never dissipate energy. So all available states are possible with “equal probability”, therefore configurations representing life are so improbable that they must be achieved with some purposeful act. These are the same “tornado in a junkyard” arguments made by the earlier Creationists, only now applied to molecular level systems.

I note that Marks and Dembski’s papers criticize evolutionary biology models especially, and that they do so based on the ID idea that the modeled structure “smuggles in” information. Since this isn’t a neutral view of algorithms but a specifically chosen example, my own decision is different.

Their motivations are certainly recognizable and familiar. But I think the real questions are whether they are playing by the rules, and whether their research has interpretations and applications other than the ones they focus upon. My answer to both, on the basis of the papers themselves, is affirmative.

Certainly information (measured as randomness) is created by the environment, but it is also created by biological mechanisms such as sexual recombination or biological processes such as selection or drift (as in both cases traits are fixed in a probabilistic process).

I meant to rule out sex. Sorry for the confusion about that. I agree that random processes play in various aspects of evolution. Whether we call them biological or not depends on how we draw boxes. I don’t have much confidence in my ideas on box-drawing, except in the case of mutation.

Incidentally, speaking of analogies, some population models are analogous to the Bayesian inference models used in machine learning. The population’s genome pool (the distribution of alleles over the population) is analogous to a learning machine that learns about the environment by trial and error.
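That analogy can be made concrete with the textbook one-locus haploid selection recursion, a deliberately minimal sketch (the function name and fitness values are my own; drift, mutation, and recombination are all omitted):

```python
def next_freq(p: float, w_a: float, w_b: float) -> float:
    """One generation of haploid selection: new frequency of allele A,
    reweighting by fitness exactly as Bayes reweights by likelihood."""
    mean_fitness = p * w_a + (1 - p) * w_b
    return p * w_a / mean_fitness

# Allele A is 10% fitter; start it rare and iterate.
p = 0.01
for _ in range(200):
    p = next_freq(p, w_a=1.1, w_b=1.0)

# The population "infers" which allele the environment favors: its allele
# frequencies converge on the fitter variant generation by generation.
```

Under these assumed numbers the fitter allele sweeps close to fixation, which is the sense in which the allele distribution “learns” the environment.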

A man after my own heart. It seems to me that ontogeny is much like a chain of behaviors extended through trial and error. My undergrad studies in experimental psychology might bias my perspective just a bit. Once, at ARN, I observed that trial-and-error learning in squirrels has enabled them to defeat a great number of intelligently-designed, “squirrel-proof” bird feeders.

[Btw, this knowledge becomes obsolete when the environment changes, so the machine has to learn anew. The natural specification of the object to measure, namely how the resulting traits work out in fitness terms, changes with the environment (as does the object itself). So this isn’t exactly analogous to the minimum-description-length algorithmic information measure, at least as far as this layman understands.]

Again, a man after my own heart. Not only does fitness change with the environment, but organisms change the environment, and thus fitness. I know for a fact that such coupling and feedback makes for a complex dynamical system. Fitness is a tricky concept.

The major inconsistency I see in the ID literature is that they make the claim that certain end states are extremely improbable because they have an underlying assumption that they are only one of an extremely large number of states that exist simultaneously.

Or at least they claim that using only “natural laws” those end states are extremely improbable. In their model those end states are a virtual certainty, but the use of natural processes to get there is for some reason unacceptable.

I especially appreciate a gentle response from an active and effective opponent of ID like you.

I believe it was late last year that I glanced at the response by Marks and Dembski. I noted a new “information” as I would a new chunk of space debris, and guessed that Marks was openly involved in ID. I have not read the two papers yet, but seeing active information invoked when Haggstrom’s criticism is of vintage 2002 ID makes me queasy.

It appears that ID and what came to be evolutionary informatics overlapped last year. I have no desire to be defensive about my affiliation with the lab, or to sweep anything under the rug. I can only say that I hope Marks will keep evolutionary informatics cleanly separated from ID in the future. I believe that Marks and Dembski have the goals of IDists, but I must back them when they “play fair.”

It is very common for molecular biologists to speak of cells as exporting entropy. I’m not sure about the formal correctness of what they’re saying. But my point is that there’s an established notion of information moving between biosystem X and not-X.

I don’t see how a closed system can do much in the way of evolution. I am far from the first person to suggest that evolution depends critically on the fact that the earth is an open system far from equilibrium. (Granville Sewell’s attempts to support ID with the Second Law are ludicrous.) So I was trying to say that I think evolutionary systems should be modeled as open. I cannot see how an organism implements a mutation operator. It seems that a non-biological entity causes the mutation. Perhaps I am missing something.

I’m getting a bit weary, and will have to focus on my work soon. Forgive my rush here.

Now wait just a minute here. Academic freedom is all well and fine and playing fair is all well and good, but when you rule out sex I really must object. That kind of neoconservative abstinence-only crap is … What? Oh … never mind.

If you will notice, Casey Luskin’s article “Richard Dawkins on the Origin of Genetic Information” has been updated. I emailed Casey yesterday and pointed out the EvolutionNews blog’s statement of purpose:

“The misreporting of the evolution issue is one key reason for this site. Unfortunately, much of the news coverage has been sloppy, inaccurate, and in some cases, overtly biased. Evolution News & Views presents analysis of that coverage, as well as original reporting that accurately delivers information about the current state of the debate over Darwinian evolution.”

Yet he failed to mention that Dawkins rebutted the video and very nicely answered the challenge: http://www.skeptics.com.au/articles/dawkins.htm. This is a prime example of coverage that is “sloppy, inaccurate, and in some cases, overtly biased”. A quick Google search for “Dawkins and information” would have been enough research to discover Dawkins’s response.

Therefore, while I commend Luskin for posting the rebuttal, he still is not excused from poor research. He also doesn’t admit that Dawkins answered the question (Luskin says: Read Dawkins’ response at http://www.skeptics.com.au/articles/dawkins.htm and see if he still has yet to satisfactorily answer the question!).

Now that Luskin admits that Dawkins answers the “Information Challenge”, I think someone on Panda’s Thumb should challenge Luskin, who says, “Read Dawkins’ response…and see if he still has yet to satisfactorily answer the question!”, to show where Dawkins is wrong. Don’t let him off the hook here.

Now that Luskin admits that Dawkins answers the “Information Challenge”, I think someone on Panda’s Thumb should challenge Luskin, who says, “Read Dawkins’ response…and see if he still has yet to satisfactorily answer the question!”, to show where Dawkins is wrong. Don’t let him off the hook here.

Are you guys really trying to tell us ANYBODY is still bothering with that old video?

calculating probabilities is not possible at all if you are looking at the evolution of traits in the past (fossil record).

I read this and the rest of your post as saying that the argument from improbability is sure to be closely akin to an argument from ignorance. I’ve certainly made similar remarks, and I haven’t entirely abandoned them. But footnote 4 on page 7 of “Conservation of Information in Search: Measuring the Cost of Success” gives me pause. It happens that I am making similar observations in a paper I’m writing. Of course, I am not working toward the same conclusions that Marks and Dembski are.