When ignorance applies scientific vacuity

Just when you think you have seen a new low in scientific ignorance amongst ID activists, Salvador Cordova comes to the rescue by arguing that

Salvador Cordova Wrote:

In information science, it is empirically and theoretically shown that noise destroys specified complexity, but cannot create it. Natural selection acting on noise cannot create specified complexity. Thus, information science refutes Darwinian evolution. The following is a great article that illustrates the insufficiency of natural selection to create design.

In fact, quite to the contrary, simple experiments have shown that the processes of natural selection and variation can indeed create specified complexity. In other words, contrary to Sal’s scientifically vacuous claims, information science, rather than refuting Darwinian evolution, has ended up strongly supporting it.

So what causes this significant level of confusion about evolutionary theory and information theory?

Information theoretical concerns

Contrary to Dembski’s claims, it has been shown that evolutionary algorithms can in fact create complex specified information [1]. Dembski admits as much in his criticisms of Tom Schneider’s work on Ev. Rather than arguing that evolutionary algorithms cannot generate complex specified information (CSI), Dembski tries to argue that the CSI has been ‘smuggled in’ by the algorithm. In other words, algorithms can generate CSI, but rather than being actual it is merely apparent. Despite Wesley Elsberry’s “Algorithm Room” challenge, Dembski has been unable to explain how to differentiate between actual and apparent CSI. In other words, whenever we detect ‘CSI’ we cannot establish its origins without additional information.
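The basic point can be probed in a few lines of code. The following is a minimal, hypothetical sketch (a Dawkins-style “weasel” program of my own, not Schneider’s Ev): noise alone never accumulates a match to a specified target, but noise filtered by cumulative selection does, and quickly.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"   # the "specification"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    """Number of positions at which s matches the specified target."""
    return sum(a == b for a, b in zip(s, TARGET))

def evolve(pop_size=100, mut_rate=0.05, seed=1):
    """Cumulative selection: keep the best of each mutated brood."""
    rng = random.Random(seed)
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    gens = 0
    while score(parent) < len(TARGET):
        # variation: each offspring is a noisy copy of the parent
        brood = ["".join(c if rng.random() > mut_rate else rng.choice(ALPHABET)
                         for c in parent) for _ in range(pop_size)]
        # selection: keep the fittest of brood and parent
        parent = max(brood + [parent], key=score)
        gens += 1
    return gens

# Selection reaches the 28-character target in a modest number of
# generations; blind sampling would need on the order of 27**28 draws.
print("generations to full match:", evolve())
```

The toy has a fixed target, which Ev deliberately does not; the point here is only that “noise plus selection” demonstrably accumulates a match to an externally specified pattern, which is exactly what the quoted argument says cannot happen.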

In other words, information theory does not refute Darwinian evolution; in fact, information theoretical approaches show how CSI can trivially arise under processes of variation and selection.

Information content with and without selection

It’s time for ID to stand up and teach the controversy, not manufacture one where none exists, at least not from a scientific perspective. Information theory is no enemy of Darwinian evolution; quite the contrary.

27 Comments

Is Sal arguing that the efficiency of a framastat is inversely proportional to the Finagle factor squared?

I find that hard to believe theoretically or empirically! Obviously, Sal has forgotten to factor in the cosine of the angle of the dangle and the effect that has on a framastat when the Moon is in the Second House and Jupiter aligns with Mars.

Sal, Sal, Sal, what are we to do with you? If you don’t eat your meat you can’t have any pudding. How can you have pudding if you don’t eat your meat?

I came to realize a while ago that many UD moderators are “rubbermaid critics”. You can nail them down and prove a point to them, but give them a few days, they’ll forget the whole thing, and they’ll just “pop” right back to spouting the same nonsense you actually proved to be false. (Someone else said it was like punching water - no matter how hard you drive your point home, it will leave no lasting impression.) Anyway, I proved that random mutations can produce information a while back on one of the UD threads. Am I surprised that they so quickly forget it? Of course not.

I remember hearing about evolutionary algorithms a long time ago, but it seems strange that it never seems to have really come to people’s homes and workplaces.

There may be lots of ways in which simple computer programmes using these algorithms might make life easier for people, from planning shopping days to making duty rosters, or even, according to the talk.origins page, playing the stock market.

I long for the day that evolutionary algorithms become more widely used - because then people could see for themselves how RM&NS in their dumb machines can beat their own intelligent designs.

Who knows, they might even see the parallels with theistic evolution - “ I used mutation and selection to make something useful, a convenient method used by the Lord since 4 Billion B.C.”

Speaking of GAs and natural selection, here are some links I posted on UD that show a simple experiment with GAs searching over the English language for valid words. They appear to provide a several-orders-of-magnitude speedup over blind search…
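For readers who want a feel for where that speedup comes from, here is a small, hypothetical sketch (the toy word list and fitness function are my own assumptions, not the linked experiments, which used a full dictionary): a mutate-and-select search finds a valid word in a tiny fraction of the draws blind sampling would need.

```python
import random

# Toy stand-in for a real English word list (an assumption for
# illustration; the linked experiments used a full dictionary).
WORDS = {"CHART", "SMART", "START", "CHESS", "QUICK", "BRAIN"}
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
LENGTH = 5

def fitness(s):
    """Best letter-by-letter match against any word in the list."""
    return max(sum(a == b for a, b in zip(s, w)) for w in WORDS)

def ga_search(pop_size=50, mut_rate=0.1, seed=3):
    """Mutate the current best string, keep the fittest (elitist GA)."""
    rng = random.Random(seed)
    best = "".join(rng.choice(ALPHABET) for _ in range(LENGTH))
    evaluations = 0
    while best not in WORDS:
        pool = ["".join(c if rng.random() > mut_rate else rng.choice(ALPHABET)
                        for c in best) for _ in range(pop_size)]
        best = max(pool + [best], key=fitness)
        evaluations += pop_size
    return evaluations

# Blind search would need roughly 26**5 / len(WORDS) ~ 2 million random
# draws on average to hit one of these six words.
print("GA evaluations until a valid word:", ga_search())
```

The partial-match fitness gives the search a gradient to climb, which is the whole trick: selection retains partial successes that blind search throws away.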

“I remember hearing about evolutionary algorithms a long time ago, but it seems strange that it never seems to have really come to people’s homes and workplaces.”

Routine evolutionary computation is definitely coming. Not only is the technology maturing, but computers are getting fast enough to really make it practical.

In ten years the automated generation of programs using GP and other similar evolutionary computation techniques will be a common part of software engineering. What else are you going to do with your 32-core 64-bit microprocessor with 32 GB of RAM?

This “CSI cannot be created by evolutionary processes” stuff is going to sound awfully silly and quaint. It already sounds silly and quaint to me, as I work in this area myself and am presently in the process of launching a company around EC technology.

I remember hearing about evolutionary algorithms a long time ago, but it seems strange that it never seems to have really come to people’s homes and workplaces.

They’re used regularly in electrical engineering.

As I understand it, many of the high-end packages used for mapping circuits into field-programmable logic devices use a random-walk iterative method sharing many features with what we call evolutionary algorithms.

These programs were developed this way because the sheer number of possible fits tended to militate against a straight “optimize by looking at all possible combinations” method.
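That random-walk approach can be sketched in miniature. The toy problem below is my own construction, not any vendor’s tool: eight connected blocks are placed on a 4x4 grid by simulated annealing, the same accept-worse-moves-early, cool-down-later idea the placement engines rely on.

```python
import math
import random

# Eight "logic blocks" wired in a ring; place them on a 4x4 grid so the
# total Manhattan wire length is small.  Even this toy has 16!/8!
# distinct placements; real FPGA netlists are vastly worse, hence the
# iterative random-walk methods.
NETS = [(i, (i + 1) % 8) for i in range(8)]

def wirelength(pos):
    return sum(abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1])
               for a, b in NETS)

def anneal(seed=7, steps=20000):
    rng = random.Random(seed)
    cells = [(x, y) for x in range(4) for y in range(4)]
    rng.shuffle(cells)
    pos, free = cells[:8], cells[8:]     # blocks 0..7, plus empty cells
    cost = wirelength(pos)
    temp = 5.0
    for _ in range(steps):
        i, k = rng.randrange(8), rng.randrange(len(free))
        pos[i], free[k] = free[k], pos[i]           # propose a random move
        new = wirelength(pos)
        if new <= cost or rng.random() < math.exp((cost - new) / temp):
            cost = new                               # accept the move
        else:
            pos[i], free[k] = free[k], pos[i]        # reject: undo the move
        temp *= 0.9995                               # cool slowly
    return cost

print("final wire length:", anneal())  # the optimum for this ring is 8
```

Early on, high temperature lets the walk accept bad moves and escape local traps; as it cools, the search settles into a low-cost placement without ever enumerating the space.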

There may be lots of ways in which simple computer programmes using these algorithms might make life easier for people, from planning shopping days to making duty rosters, or even, according to the talk.origins page, playing the stock market.

I don’t know about you, but I rarely ask my computer to optimize a part of my life. The industrial applications of EC are already here, such as optimizing jet engine designs.

As for playing the market, we just had a little talk about that on GP-List (now a Yahoo group, actually). The market is just a bit harder to optimize against than a lot of folks think!

ID’s slogan was not supposed to be “teach the controversy.” That was a mistake at the printers. It was supposed to be “Contrary to teaching…”. However, due to limited funding (because the evolutionists are getting all of the big fat government grants) they were unable to afford the correction. So all this hoopla over them spreading misinformation and ignorance is a moot point, since their original goal was to be a source contrary to the process of enlightening.

Oh man!!! Just when you thought they cannot sink any lower…
Sal’s statement reminds me of the assertion that bumble bees should not be able to fly, that they are aerodynamically unsuited for flying. And that was proved mathematically!!!

Well, I guess that life, just like bumble bees, doesn’t know anything about mathematics or thermodynamics!

Why don’t you evolutionists put a realistic mutation rate and genome length into Dr Schneider’s ev model and see how many generations it takes for the binding sites to evolve? The rate of accumulation of information is far too slow to explain macroevolution by random point mutations and natural selection.

Why don’t you evolutionists put a realistic mutation rate and genome length into Dr Schneider’s ev model and see how many generations it takes for the binding sites to evolve? The rate of accumulation of information is far too slow to explain macroevolution by random point mutations and natural selection.

These are nonsensical objections. First of all, the program is available for anyone to use, so if there are problems with the mutation rates etc., why is it that, beyond making such claims, creationists seem unable to do the analysis? Second, what Schneider has shown is that Dembski’s CSI can evolve naturally, and thus that CSI is not a good indicator of design.

As far as Gould’s hypothesis is concerned, I fail to see how it contradicts Schneider’s work, unless the objection is based on the flawed comprehension of Gould’s work so typical amongst evolution deniers.

PvM posted:
*These are nonsensical objections. First of all, the program is available for anyone to use, so if there are problems with the mutation rates etc., why is it that, beyond making such claims, creationists seem unable to do the analysis? Second, what Schneider has shown is that Dembski’s CSI can evolve naturally, and thus that CSI is not a good indicator of design.

As far as Gould’s hypothesis is concerned, I fail to see how it contradicts Schneider’s work, unless the objection is based on the flawed comprehension of Gould’s work so typical amongst evolution deniers.*

I know the program is available on-line. I was invited by Dr Schneider to examine the program. I have done extensive parametric studies with the model and I am discussing these results with evolutionists on the following two forums:

Unlike Dembski, I believe that ev is a plausible model of random point mutations and natural selection. My contention is that in order to get acceptable results from ev, you have to use realistic input parameters (such as mutation rates and genome lengths).

I know that IDers have long criticized Dr Schneider’s published results from ev for his use of an unrealistically small genome length and unrealistically large mutation rate; however, I don’t know why nobody has plugged realistic values into the model to see what the results would show.

Dr Schneider is not willing to discuss these results from his model publicly, but Paul Anagnostopoulos, Dr Schneider’s Java programmer, is discussing them. I am not interested in starting a third discussion on this topic, but if you want to try to understand, you can visit the above two URLs.

I know the program is available on-line. I was invited by Dr Schneider to examine the program. I have done extensive parametric studies with the model and I am discussing these results with evolutionists on the following two forums:

Unlike Dembski, I believe that ev is a plausible model of random point mutations and natural selection. My contention is that in order to get acceptable results from ev, you have to use realistic input parameters (such as mutation rates and genome lengths).

So I assume you agree that evolutionary algorithms can in fact generate complex specified information, contrary to Dembski’s claims and in line with Schneider’s arguments?

I know that IDers have long criticized Dr Schneider’s published results from ev for his use of an unrealistically small genome length and unrealistically large mutation rate; however, I don’t know why nobody has plugged realistic values into the model to see what the results would show.

Begging the question. Perhaps you mean nobody refers to IDers who criticized Schneider?

Dr Schneider is not willing to discuss these results from his model publicly, but Paul Anagnostopoulos, Dr Schneider’s Java programmer, is discussing them. I am not interested in starting a third discussion on this topic, but if you want to try to understand, you can visit the above two URLs.

In other words, a drive-by shooting: unwilling and/or unable to support his claims. Next time, if you have to make additional comments, please have the decency to defend them.

PvM complains: “In other words, a drive-by shooting: unwilling and/or unable to support his claims. Next time, if you have to make additional comments, please have the decency to defend them.”

Since you know how to access Dr Schneider’s ev model, do the following: Take his baseline case using the default input values and check either the Perfect Creature or Rs>=Rf check box. Then run his case until it converges. Record the number of generations. Then increase the genome length from 256 to 512 and run that case and record the generations required for convergence. Continue to increase the genome length to 1024, 2048, 4096 and so on and observe what happens to the generations required for convergence.

Dr Schneider drew his conclusions about the evolution of the human genome based on a genome length and mutation rate found in no real organism.

You can do a similar set of cases to investigate what happens to the generations for convergence when you use a realistic mutation rate. Dr Schneider used a mutation rate of 1 mutation per 256 bases per generation, which is not seen in any living thing. Try using a mutation rate of 1 mutation per 1,000,000 bases per generation, which is in the observed, measured range of mutation rates for prokaryotes, and see how this affects the generations for convergence.
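The qualitative shape of such a sweep is easy to reproduce. The sketch below is NOT ev, just a toy proxy of my own: only a fixed 12-base “site” is under selection, and genome length enters through the per-base mutation rate, mirroring ev’s default of roughly one mutation per genome per generation.

```python
import random

# A toy stand-in for the parametric study described above: a hypothetical
# 12-base target site under selection, with a per-base mutation rate of
# 1/length (one mutation per genome per generation, as in ev's default).
SITE = "ACGTACGTACGT"
BASES = "ACGT"

def generations_to_converge(mut_rate, pop=16, seed=5, cap=500_000):
    rng = random.Random(seed)
    parent = "".join(rng.choice(BASES) for _ in SITE)
    score = lambda g: sum(a == b for a, b in zip(g, SITE))
    gens = 0
    while score(parent) < len(SITE) and gens < cap:
        brood = ["".join(c if rng.random() > mut_rate else rng.choice(BASES)
                         for c in parent) for _ in range(pop)]
        parent = max(brood + [parent], key=score)   # elitist selection
        gens += 1
    return gens

# Longer genome -> lower per-base rate -> slower convergence, which is
# the qualitative effect the parametric studies above are probing.
for length in (256, 512, 1024, 2048):
    print(f"genome length {length:5d}:",
          generations_to_converge(mut_rate=1.0 / length), "generations")
```

Note that this illustrates only the direction and rough scale of the effect; whether slower convergence is “too slow” is a separate question about real population sizes and generation times, not something a toy like this can settle.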

Now we are getting somewhere, but before we continue, could you let me know whether you agree or disagree that Schneider’s work shows that CSI is not a reliable detector of design?

Your experiment suggests increasing the length of the genome. Are all other parameters kept constant? What about mutations?

Of course mutation rates will affect the convergence times; what Schneider’s work shows is that when it converges, it converges to R being close to the predicted R_f.

Did you know that Rsequence would evolve to Rfrequency when you first ran the program? No, I ran the program to see if this would happen or not. I was testing my PhD thesis. If the program had failed, my thesis would have been in jeopardy!

Schneider’s work showed that 1) convergence of R to the predicted R_f can take place under processes of selection and variation 2) that such processes can indeed generate CSI.

What about increasing the population size for instance?

Won’t a slower evolution take too long in nature? No. For practical reasons we usually use a tiny population in Ev, generally only 16 organisms. In nature there are usually populations of millions. For example, in the lab a single cubic centimeter (ml, a milliliter) of E. coli culture can easily contain 10^8 bacteria. (That’s 100 million.) With an error rate of one in 10^6 (i.e., one in a million) at each genetic location, there will be plenty of variation to drive evolution. Notice that we have 6 billion people on the planet, so there is lots of opportunity for us to continue evolving. (Have you been wearing your seatbelt? People who don’t wear seatbelts are being selected against …)
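The numbers in that answer are easy to check. A back-of-envelope calculation (just arithmetic on the figures quoted above, not part of Schneider’s FAQ):

```python
population = 10**8   # E. coli per ml of culture, as quoted above
error_rate = 1e-6    # mutations per site per generation

# Expected number of cells carrying a fresh mutation at any *given*
# genomic site, each generation, in a single ml of culture:
mutants_per_site = population * error_rate
print(f"expected new mutants per site per generation: {mutants_per_site:.0f}")
```

So even at a “slow” rate of one in a million, every site in the genome is retried on the order of a hundred times per generation in a single milliliter of culture; the variation that selection needs is not in short supply.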

If the argument is now that CSI can indeed be generated, but that under these processes it would take too long, then we have to ask ourselves the following question: how does ID explain the existence of CSI, and how would science go about answering these questions?
For instance, science may ask whether the simulations match reality sufficiently to be of relevance to larger genomes.

Remember Behe and Snoke’s work which when applied to a ton of soil would give enough opportunity for the required mutations to arise?

As for the claim that this work contradicts Gould’s punctuated equilibria, I have yet to see any evidence for it.

1 mutation per generation and small populations… Of course the time to complete will increase. And as others have pointed out to you, the program ignores many relevant aspects of evolution. Nevertheless, it shows how selection and variation can in principle explain the information in the genome. The question now becomes: how does this compare to ID predictions?

A previous simulation study examined how long it would be expected to take for new binding sites to arise in a regulatory sequence via point mutations (Stone and Wray 2001). Here the authors considered a sequence of DNA evolving in a random neutral walk by point mutation, and observed how long it would take to find a particular binding site for a transcription factor. They showed, for example, for Drosophila, that to find two 6-bp transcription factor binding sites in a 200-bp sequence would take around 54,000 years, which seems reasonable given the rate at which Drosophila may evolve its gene expression patterns. However, the corresponding time for humans, with their longer generation times, was over 13 million years, which seems too slow a rate to be likely to be useful in evolutionary change.

The study in question, “Rapid Evolution of cis-Regulatory Sequences via Local Point Mutations” by Jonathon R. Stone and Gregory A. Wray, published in Molecular Biology and Evolution 18:1764-1770 (2001), shows how neutrality may be sufficient by itself to explain the evolution of binding sites.

Although the evolution of protein-coding sequences within genomes is well understood, the same cannot be said of the cis-regulatory regions that control transcription. Yet, changes in gene expression are likely to constitute an important component of phenotypic evolution. We simulated the evolution of new transcription factor binding sites via local point mutations. The results indicate that new binding sites appear and become fixed within populations on microevolutionary timescales under an assumption of neutral evolution. Even combinations of two new binding sites evolve very quickly. We predict that local point mutations continually generate considerable genetic variation that is capable of altering gene expression.

I believe the problem with your argument is that you are treating the genome as totally random and requiring a binding site to evolve anywhere in the genome. Perhaps more realistic models are needed?
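The neutral-walk mechanism in the Stone and Wray abstract can be sketched directly. This is a much-simplified, single-lineage toy of my own; the motif and parameters are assumptions, and the published study models whole populations and fixation times, which this does not.

```python
import random

# A 200-bp sequence drifts by neutral point mutation; count the mutations
# needed before a given 6-bp motif appears anywhere in it.  This only
# illustrates why short motifs are found quickly by neutral walks.
BASES = "ACGT"
MOTIF = "TATAAT"     # a hypothetical 6-bp binding site
SEQ_LEN = 200

def mutations_until_motif(seed):
    rng = random.Random(seed)
    seq = [rng.choice(BASES) for _ in range(SEQ_LEN)]
    steps = 0
    while MOTIF not in "".join(seq):
        i = rng.randrange(SEQ_LEN)
        seq[i] = rng.choice(BASES)   # one neutral point mutation
        steps += 1
    return steps

runs = sorted(mutations_until_motif(s) for s in range(20))
print("median mutations until the motif appears:", runs[len(runs) // 2])
```

With ~195 candidate windows in a 200-bp sequence, the expected wait is short on mutational timescales, which is the intuition behind the paper’s microevolutionary estimates; turning mutation counts into years then depends on population size, mutation rate, and generation time, exactly the factors that separate the Drosophila and human figures quoted above.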