
But what if the CIO is herself a chimp?

Blogger Wintery Knight, a programmer by day, comments on neo-Darwinism’s view of how information gets encoded:

Imagine a materialist CIO who thought that code was written by large numbers of monkeys pounding at keyboards instead of by engineers. He would be firing all the software engineers and replacing them with monkeys in order to generate better code. And he would call this method of generating new code “science”. It’s the scientific way of generating new information, he would say, and using software engineers to generate new code isn’t “science”. It’s what he learned at UC Berkeley and UW Madison! His professors of biology swear that it is true!

It seems to me that there are incentives in place that make it impossible for Darwinists to discuss their materialistic religion honestly. They feel pressured to distort the evidence in the public square, and there are political pressures on them to distort the evidence in order to avoid being censured by their employers and colleagues. When questions about the evidence for Darwinism come up, they have to rally around their religion and chant the creeds that comfort them. There can be no questioning of the faith in the presupposition of materialism.

AMW, by “complex designs,” are you referring to something which corresponds to “complex specified information”? If so, has it been established that random change + selection can indeed produce complex designs?

When Theory and Experiment Collide — April 16th, 2011 by Douglas Axe
Excerpt: Based on our experimental observations and on calculations we made using a published population model [3], we estimated that Darwin’s mechanism would need a truly staggering amount of time—a trillion trillion years or more—to accomplish the seemingly subtle change in enzyme function that we studied. http://biologicinstitute.org/2.....t-collide/

Thank Goodness the NCSE Is Wrong: Fitness Costs Are Important to Evolutionary Microbiology
Excerpt: it (an antibiotic resistant bacterium) reproduces slower than it did before it was changed. This effect is widely recognized, and is called the fitness cost of antibiotic resistance. It is the existence of these costs and other examples of the limits of evolution that call into question the neo-Darwinian story of macroevolution. http://www.evolutionnews.org/2.....s_wro.html

A comparative approach for the investigation of biological information processing: An examination of the structure and function of computer hard drives and DNA – David J D’Onofrio, Gary An – Jan. 2010
Excerpt: It is also important to note that attempting to reprogram a cell’s operations by manipulating its components (mutations) is akin to attempting to reprogram a computer by manipulating the bits on the hard drive without fully understanding the context of the operating system. (T)he idea of redirecting cellular behavior by manipulating molecular switches may be fundamentally flawed; that concept is predicated on a simplistic view of cellular computing and control. Rather, (it) may be more fruitful to attempt to manipulate cells by changing their external inputs: in general, the majority of daily functions of a computer are achieved not through reprogramming, but rather the varied inputs the computer receives through its user interface and connections to other machines. http://www.tbiomed.com/content/7/1/3

“I have seen estimates of the incidence of the ratio of deleterious-to-beneficial mutations which range from one in one thousand up to one in one million. The best estimates seem to be one in one million (Gerrish and Lenski, 1998). The actual rate of beneficial mutations is so extremely low as to thwart any actual measurement (Bataillon, 2000, Elena et al, 1998). Therefore, I cannot …accurately represent how rare such beneficial mutations really are.” (J.C. Sanford; Genetic Entropy page 24) – 2005

“The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain – Michael Behe – December 2010
Excerpt: In its most recent issue The Quarterly Review of Biology has published a review by myself of laboratory evolution experiments of microbes going back four decades.,,, The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent.,,, I dub it “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain.(that is a net ‘fitness gain’ within a ‘stressed’ environment i.e. remove the stress from the environment and the parent strain is always more ‘fit’)

The fact that random change + [artificial, algorithmic] selection [matched to a specified fitness metric on the space of possibilities in an island of function set up by designers of the relevant GA program] can produce complex designs doesn’t mean it’s the fastest or most efficient way of doing so.

AMW, please point me to the exact paper and test that shows a violation of Genetic Entropy, by bacteria generating more functional complexity than was already present in the parent strain, by passing ‘the fitness test’:

Thank Goodness the NCSE Is Wrong: Fitness Costs Are Important to Evolutionary Microbiology
Excerpt: it (an antibiotic resistant bacterium) reproduces slower than it did before it was changed. This effect is widely recognized, and is called the fitness cost of antibiotic resistance. It is the existence of these costs and other examples of the limits of evolution that call into question the neo-Darwinian story of macroevolution. http://www.evolutionnews.org/2.....32491.html

AMW, something tells me you are going to have an extremely difficult time showing a gain in functional complexity over and above what was already present in the bacteria;

Three Subsets of Sequence Complexity and Their Relevance to Biopolymeric Information – David L. Abel and Jack T. Trevors – Theoretical Biology & Medical Modelling, Vol. 2, 11 August 2005, page 8
“No man-made program comes close to the technical brilliance of even Mycoplasmal genetic algorithms. Mycoplasmas are the simplest known organism with the smallest known genome, to date. How was its genome and other living organisms’ genomes programmed?” http://www.biomedcentral.com/c.....2-2-29.pdf

The Law of Physicodynamic Insufficiency – Dr David L. Abel – November 2010
Excerpt: “If decision-node programming selections are made randomly or by law rather than with purposeful intent, no non-trivial (sophisticated) function will spontaneously arise.”,,, After ten years of continual republication of the null hypothesis with appeals for falsification, no falsification has been provided. The time has come to extend this null hypothesis into a formal scientific prediction: “No non trivial algorithmic/computational utility will ever arise from chance and/or necessity alone.” http://www.scitopics.com/The_L.....iency.html

2: The gigo/qiqo principle, i.e. the quality of the o/p is dependent on what was put in: as input data, as algorithms and their data structures and coding, and as underlying logic and assumptions in the algorithms. For good or bad.

3: Was it Brillouin who pointed out the obvious: computers transform information; they do not create it.

4: In this case, the machine, the operating system and the working software of the application all work together to put you on an island of function. All intelligently input.

5: In particular, the GA works by setting up a map of a space of configs in some co-ordinate space, with mapped values of some objective function that has a nice slope behaviour, i.e. it has trends that accurately tell you warmer/colder. [There are perverse functions that do not do that, e.g. ones with a lot of jump discontinuities from very good to very bad performance: no hills to climb, only cliffs with no talus slopes at the foot.]

6: All of this again is set up.

7: Start at some point in the space, then run a scattered ring (or a higher-order analogue to a ring, e.g. a sphere or a multidimensional sphere-like object) of test points.

8: Pick the new starting points as those that are most promising, i.e. we have algorithmic selection dependent on the nice behaviour of the objective function.

9: Climb, to at least a local maximum.

10: This capacity is dependent on the mapped function and on the algorithm. That is where the info is coming from; the samples are just a directed cross-section of the space.

11: For more details, cf. the case here using the Mandelbrot set (watch the pics and the videos).

12: In short, we have here a model of micro-evolution within an island of already defined complex function. But micro-evo is accepted by everybody, including young earth creationists.

13: The real challenge is to search in the wider space of DNA sequences and reach such islands of function without intelligent direction, and before there is function, so there can be differences of function to hill-climb on by differential success.

14: In other words, we have to get to embryologically feasible body plans with DNA coding for new proteins to make tissues and regulatory networks controlling development from the fertilised cell.

15: Dozens of times over, with 10+ million bits of info that have to work right the first time, or there is no new life form to grow up and have a chance to strut its superior performance and reproduce.

16: Starting with the first life form, and ending with us, with our linguistic capacity.

17: The ONLY empirically observed source of FSCI is design. And on the infinite monkeys type analysis of config spaces, it is maximally unlikely that such FSCI will come about by chance even once, starting at about 1,000 bits of info. Which is ridiculously small for a control program: 125 bytes.

18: So, carefully watch how controlled random changes in a complex functioning system are used to effect hill climbing on a metric that was intelligently loaded in and with a picking algorithm on the successive groups of samples that was also intelligently designed.

19: ARTIFICIAL selection within an island of intelligently set-up pre-existing function is not to be equated to the real problem of originating complex biofunction.
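The hill-climbing loop in points 5 through 10 above can be sketched in a few lines. Everything here is an illustrative invention: the objective function, the ring size, and the step radius were all chosen by the programmer, which is precisely the point the list is making about the "warmer/colder" slope being designed in rather than found.

```python
import random

# Sketch of the hill-climbing loop in points 5-10. The objective function,
# ring size, and step radius are all illustrative inventions.
def fitness(x, y):
    # A designed, nicely sloped objective with a single peak at (3, -2).
    return -((x - 3.0) ** 2 + (y + 2.0) ** 2)

def hill_climb(start, steps=2000, ring=12, radius=0.1):
    best = start
    for _ in range(steps):
        # Point 7: scatter a ring of sample points around the current position.
        samples = [(best[0] + random.uniform(-radius, radius),
                    best[1] + random.uniform(-radius, radius))
                   for _ in range(ring)]
        # Point 8: algorithmic selection -- keep the warmest (parent included).
        best = max(samples + [best], key=lambda p: fitness(*p))
    return best  # Point 9: at or near a local maximum

x, y = hill_climb((0.0, 0.0))
print(round(x, 1), round(y, 1))  # should land near the designed peak (3.0, -2.0)
```

Swap in a "perverse" objective full of cliffs and the same loop wanders blindly, which is the dependence on the nice slope that point 10 notes.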


The Paradox of the “Ancient” Bacterium Which Contains “Modern” Protein-Coding Genes:
“Almost without exception, bacteria isolated from ancient material have proven to closely resemble modern bacteria at both morphological and molecular levels.” Heather Maughan*, C. William Birky Jr., Wayne L. Nicholson, William D. Rosenzweig§ and Russell H. Vreeland; http://mbe.oxfordjournals.org/...../19/9/1637

These following studies, by Dr. Cano on ancient bacteria, preceded Dr. Vreeland’s work:

“Raul J. Cano and Monica K. Borucki discovered the bacteria preserved within the abdomens of insects encased in pieces of amber. In the last 4 years, they have revived more than 1,000 types of bacteria and microorganisms — some dating back as far as 135 million years ago, during the age of the dinosaurs.,,, In October 2000, another research group used many of the techniques developed by Cano’s lab to revive 250-million-year-old bacteria from spores trapped in salt crystals. With this additional evidence, it now seems that the “impossible” is true.” http://www.physicsforums.com/s.....p?t=281961

Revival and identification of bacterial spores in 25- to 40-million-year-old Dominican amber
Dr. Cano and his former graduate student Dr. Monica K. Borucki said that they had found slight but significant differences between the DNA of the ancient, 25-40 million year old amber-sealed Bacillus sphaericus and that of its modern counterpart (thus ruling out that it is a modern contaminant, yet at the same time confounding materialists, since the change is not nearly as great as evolution’s ‘genetic drift’ theory requires). http://www.sciencemag.org/cgi/...../5213/1060

Dr. Cano’s work on ancient bacteria came in for intense scrutiny since it did not conform to Darwinian predictions, and since people found it hard to believe you could revive something that was millions of years old. Yet Dr. Cano has been vindicated:

“After the onslaught of publicity and worldwide attention (and scrutiny) after the publication of our discovery in Science, there have been, as expected, a considerable number of challenges to our claims, but in this case, the scientific method has smiled on us. There have been at least three independent verifications of the isolation of a living microorganism from amber.” http://www.uncommondescent.com.....ent-357693

In reply to a personal e-mail from myself, Dr. Cano commented on the ‘Fitness Test’ I had asked him about:
Dr. Cano stated: “We performed such a test, a long time ago, using a panel of substrates (the old gram positive biolog panel) on B. sphaericus. From the results we surmised that the putative “ancient” B. sphaericus isolate was capable of utilizing a broader scope of substrates. Additionally, we looked at the fatty acid profile and here, again, the profiles were similar but more diverse in the amber isolate.”
Fitness test which compared ancient bacteria to its modern day descendants, RJ Cano and MK Borucki

Thus, the most solid evidence available for the most ancient DNA scientists are able to find does not support evolution happening on the molecular level of bacteria. In fact, according to the fitness test of Dr. Cano, the change witnessed in bacteria conforms to the exact opposite: Genetic Entropy, a loss of functional information/complexity, since fewer substrates and fatty acids are utilized by the modern strains. Considering the intricate level of protein machinery it takes to utilize individual molecules within a substrate, we are talking about an impressive loss of protein complexity, and thus of functional information, from the ancient amber-sealed bacteria. Here is a revisit to the video of the ‘Fitness Test’ that evolutionary processes have NEVER passed, as a demonstration of the generation of functional complexity/information above what was already present in a parent species of bacteria:

A very interesting point, and one that passed across my mind overnight.

At the machine language level, what is happening in a computer, in simple terms, is a perpetually repeated instruction cycle:

. . . Fetch –> Decode –> Execute . . .

At this level, an instruction is a bit pattern that — once fetched to the instruction register (and once augmented by reference information, either immediate or implied or pointed to elsewhere) — triggers either a network of gates and flipflops [the old fashioned, hard wired machine] or calls a lower level of program called microcode. Let’s speak in the context of microcode.

From the bit pattern (this is the decode part), a sequence of steps (the execute) is taken, perhaps with decision nodes based on flag bits and their values, 1/0. At these points a pre-programmed decision is made to branch one way or another. All along, the machine is simply manipulating bits per its functional organisation (which is of course intelligently designed).

At the close of the microcode routine for a given instruction, it goes back to the fetch cycle. At no point is the computer doing any more than blindly carrying out pre-programmed bit manipulation and electrical circuit steps designed by intelligent designers.

Back up at the machine code/assembly language level: Assembly Language is in effect a rendering of the bit patterns into abbreviated, somewhat English-like terms that say what the machine code bit patterns do:

ADDA R1, R2

Means: add the contents of registers (or possibly memory locations) R1 and R2 and put the result in accumulator register A.
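The fetch-decode-execute cycle and that ADDA-style instruction can be rendered as a toy interpreter. The instruction set (ADD, JNZ, HALT) and its tuple encoding are invented for illustration and match no real ISA; the point is that every step, including the branch decision, is pre-programmed.

```python
# Toy fetch-decode-execute loop. The instruction set (ADD, JNZ, HALT) and
# its tuple encoding are invented for illustration; real ISAs differ.
def run(program, registers):
    pc = 0  # program counter
    while pc < len(program):
        instr = program[pc]      # FETCH the instruction
        op, *args = instr        # DECODE: split opcode from operands
        if op == "ADD":          # EXECUTE: blind, pre-programmed bit pushing
            dst, a, b = args
            registers[dst] = registers[a] + registers[b]
        elif op == "JNZ":        # branch if a register (flag) is non-zero
            reg, target = args
            if registers[reg] != 0:
                pc = target      # a pre-programmed decision node
                continue
        elif op == "HALT":
            break
        pc += 1                  # back to the fetch of the next instruction
    return registers

# "ADDA R1, R2" rendered in this toy encoding: A <- R1 + R2
regs = run([("ADD", "A", "R1", "R2"), ("HALT",)], {"A": 0, "R1": 2, "R2": 3})
print(regs["A"])  # 5
```

At no point does the loop do anything beyond what its (intelligently written) table of cases specifies, which is the point made about microcode above.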

These operations are carried out using logic circuits that are in effect electrically controlled switches that are so organised in series or parallel that they control output voltage levels.

If SW1 and SW2 were in parallel, then if either SW1 OR SW2 (or both) were closed, the circuit would complete and the bulb would go on. Let’s try to represent that:

     |-- SW2:/ --|
+ ---|-- SW1:/ --|--- Bu(off) --- 0
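The parallel-switch behaviour can be modelled in a few lines, with a closed switch as True and a lit bulb as True (a logical sketch, not a circuit simulation; a series arrangement would give AND instead of OR):

```python
# The parallel-switch circuit above: closed switch = True, lit bulb = True.
def bulb_on(sw1, sw2):
    # Either closed switch completes a path from + to 0, so the bulb lights.
    return sw1 or sw2

# Truth table for the parallel (OR) arrangement:
for sw1 in (False, True):
    for sw2 in (False, True):
        print(f"SW1={sw1!s:5} SW2={sw2!s:5} bulb on: {bulb_on(sw1, sw2)}")
```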

Electronically, transistors can be operated as switches, and with load resistors, flowing currents [thanks to Ohm’s law, V = iR] can be converted into output voltages. By convention, Hi = logic 1 or True, and Lo = logic 0 or False.

The two natural logic gates are NAND and NOR: a load resistor below the rail supply, then an output take-off point, and the transistor switches below, in series or parallel. Input circuits control the transistors so that if i/p –> 1, SW –> Closed. When the load network has no current flowing, the o/p is hi, or 1; if there is a path to run current to 0, the o/p will be low.

In Boolean algebra it can be shown that NAND or NOR alone can represent any combination of the relevant logic gates. From these, the various logic operations of interest, usually AND, NOT, OR, and XOR, can be constructed. By using digital o/p feedback, we can get circuits with a memory, known as the R/S latch or flipflop. Chains of such flipflops can be made into counters, registers, and the like, allowing for storage, counting, timing, etc. Memories are usually made from special circuits that allow a cheap, small silicon-footprint implementation of storage registers.
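The universality claim is easy to check directly. Here is a minimal sketch with Python booleans standing in for the Hi/Lo voltage levels, building NOT, AND, OR, and XOR out of NAND alone and verifying them over every input combination:

```python
# Checking the universality claim: NAND alone suffices to build NOT, AND,
# OR, and XOR. Python booleans stand in for the voltage levels Hi/Lo.
def NAND(a, b):
    return not (a and b)

def NOT(a):
    return NAND(a, a)

def AND(a, b):
    return NOT(NAND(a, b))

def OR(a, b):
    return NAND(NOT(a), NOT(b))

def XOR(a, b):
    # The classic four-NAND XOR construction.
    n = NAND(a, b)
    return NAND(NAND(a, n), NAND(b, n))

# Verify against Python's own operators over every input combination.
for a in (False, True):
    for b in (False, True):
        assert AND(a, b) == (a and b)
        assert OR(a, b) == (a or b)
        assert XOR(a, b) == (a != b)
print("NAND-built gates check out")
```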

From these we construct a computer such that the machine has address, data and control buses, interfaces to the processor, which has internal storage registers and an instruction execution unit that has in it an Arithmetic and Logic operations unit: ALU. There is usually a FLAG register where bits indicating special conditions are kept and used to trigger execution along paths depending on whether they are 1/0, i.e. we see here where that branching on Yes/No comes from.

Initialising, sequencing, branching, and looping with branching, then terminating, are the heart of how programs work. Inputs are taken in, are processed according to the stored instructions [usually translated down from higher-level languages such as JAVA or BASIC etc.], and the outputs are issued. Keyboards, mice, mikes, cameras, scanners, printers, loudspeakers and visual display units are special i/p and o/p devices, communicated with by passing data back and forth, often in 8-bit chunks called bytes. 16 and 32 bits are also now commonly used.

In short, the machine is an intelligently organised complex functional information rich network of nodes, arcs and interfaces, intelligently designed to blindly effect bit manipulations.

Going to the next level, chains of instructions and inputs along a timeline, as the user interfaces with the machine, lead to the I –> P –> O cycle that we are familiar with.

When a program calls for “chance” inputs, there may be a pseudo-random bit generator or there may be a hardware interface to a device that generates genuinely random numbers. Most often the former, so even the “random” numbers are usually not so random after all.
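That determinism is easy to demonstrate. A linear congruential generator (the multiplier and increment below are the widely published Numerical Recipes constants) produces exactly the same "random" stream every time it is started from the same seed:

```python
# A pseudo-random generator is fully deterministic: same seed, same stream.
# The multiplier/increment are the widely published Numerical Recipes LCG
# constants; nothing about the output is "random" in any deep sense.
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    state = seed
    while True:
        state = (a * state + c) % m
        yield state

g1, g2 = lcg(42), lcg(42)
same = [next(g1) for _ in range(3)] == [next(g2) for _ in range(3)]
print(same)  # True: identical seed, identical "random" stream
```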

Now, the machine is processing i/ps to give rise to o/ps based on its programs and stored information. All intelligently designed and “canned,” so to speak. The result is a useful transformation of i/ps to o/ps under intelligent control and based on the intelligent design of the system.

When something like Weasel comes along and claims that information is being generated out of the thin air of random number sequences filtered by a selection process, we need to take a closer look. Weasel in fact has a “map” of a space of possible configs, where the total is about 10^40. This is of course within the trial-and-error search capacity of modern PCs, and certainly that of the raw processing power of the cosmos. But what is crucial is that there is a way of measuring distance to target, basically off the difference from the target value. This allowed Dawkins to convert the whole space of 10^40 possibilities into a warmer/colder map, without reference to whether or not the “nonsense phrases” had any meaning in themselves.

At each stage, a ring of random changes in a seed phrase was made; these were then compared with the seed itself, and the warmest was picked to move to the next step. And so on until, in about 40-60 generations in the published runs, the phrase sequence converged on the target.

VOILA! Information created out of chance variation and selection!

NOT.

The target was already preloaded all along. In this case, quite explicitly.
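To make the preloading concrete, here is a minimal sketch of a Weasel-style run (not Dawkins' original code; mutation rate and offspring count are illustrative choices). Notice that TARGET sits in the program from the start, and the scoring oracle does nothing but measure distance to it:

```python
import random

# A minimal sketch of the Weasel procedure described above. The target phrase
# is explicitly preloaded, which is the whole point: the "warmer/colder"
# metric is distance to a target the programmer supplied in advance.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # 27 symbols; 27^28 is roughly 10^40 configs

def score(phrase):
    # The warmer/colder oracle: how many positions already match the target.
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(phrase, rate=0.05):
    # Copy the seed with a small per-character chance of random change.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in phrase)

def weasel(offspring=100, seed=None):
    phrase = seed if seed is not None else "".join(
        random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while phrase != TARGET:
        generations += 1
        # Ring of mutants around the seed; keep the warmest (parent included).
        phrase = max([phrase] + [mutate(phrase) for _ in range(offspring)],
                     key=score)
    return generations

print(weasel())  # the published runs converged in roughly 40-60 generations
```

Delete the TARGET constant and the score function has nothing to measure against; the "information generation" collapses, which is the preloading at issue.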

More sophisticated GA’s do not load the target(s) EXPLICITLY, but do so implicitly.

They have an intelligently designed, well-behaved “fitness function” or objective function — one that has to have helpful trends pointing towards desired target zones — relevant to whatever you are trying to target, mapped to the config space for the “genetic code” string or the equivalent; INTELLIGENTLY mapped, too.

Notice all the neat little sales words that suggest a parallel to the biological world that is less real than it appears.

Then, when the seed genome value or config is fed in, it is tested for some figure of merit that looks at closeness to target or to desired performance.

A ring of controlled random/pseudo-random samples is tossed out. The one or ones that trend warmest are picked, and the process repeats. Eventually, we find somewhere where changes don’t make for improvement. We are at a target and, voila, information out of the thin air of random variation and selection.

NOT.

Again, look at how much intelligently designed work was put in to set up the island of function to wander around in on a nice slope and detect warmer/colder, to move towards a local peak of performance or “fitness.” No nice fitness landscape, and no effective progress. No CONTROL on the degree of randomness in the overall system, and chaos would soon dominate. No warmer/colder metric introduced at just the right times on that nice slope function, and you would wander around blindly.

In short, the performance is impressive but the prestidigitation’s impact requires us to be distracted from the wires, curtains, smoke puffs, and trap doors under the stage.

So, when we see claims that programs like Avida, ev, Tierra, etc. produce arbitrary quantities of Shannon or even functionally specific information, we need to highlight the intelligently designed apparatus that makes this impressive performance possible.

As the Mandelbrot example of yesterday showed, it is the careful design of the system that produces the result, not the chance sampling and the warmer/colder oracle that tells how to progress on a nice fitness function that gives reliable trends.

Too often, in the excitement over how wonderfully evolution has been modelled and how successful it is, this background of intelligent design and the way it drives the success of such exercises is forgotten.

We tend to see what we expect to see or want to see. So, we have to be doubly careful to examine the sort of critical assessment above.

When we do so, the result is that the GA’s model how already functioning genomes and life forms with effective body plans may adapt to niches in their environments, and to shifts in their environment. Given that there is so much closely matched co-adaptation of components above (i.e. the dreaded irreducible complexity appears), this explains why, if there is a drastic shift in environment, there is then a tendency to vanish from the annals of life.

In short, we see something that fits the actual dominant feature of the fossil record: sudden appearance as a new body plan is introduced; stasis, with a measure of adaptation and relatively minor variations within the overall body plan that are explained by relatively minor mutations [especially in regulatory features such as size and proportions]; then disappearance, or continuity into the modern world.

And, Ilion’s point is still standing: in our observation, even through a machine that “cans” it, information transformation comes from mind.

It’s great to explain the GA properly and what it can and cannot do. I think it will open the eyes of many readers, and it’s a good direction. A GA cannot be treated as some magical entity which creates information out of nowhere. I dismissed them during the peak of the Mathgrrl challenge and described them elsewhere as “taking the horse to the water AND making him drink.”

The gene duplication example given by the challenger is a different story, but we have to keep in mind that DNA duplication (replication) is the functional output of a specialized process inside the cell. That process we could examine in detail if needed.

It is when the term ‘evolution’ is used in the twisted way that Darwin’s Heirs have mangled the word. When used as it was originally meant (that meaning being the reason Darwin avoided using the word), then “evolutionary algorithm” is not an oxymoron.

KF: “… VOILA! Information created out of chance variation and selection!”

Did not the mathematician Gregory Chaitin demolish that idea years and years ago, proving mathematically (though mere common sense ought to suffice) that no execution of a computer program can ever generate a result which is not fully specified by the logic of the program acting upon the specific inputs of that specific execution?

[And I, while certainly no genius nor mathematician, saw through the charade years before I’d heard of Chaitin]

In the inputs, the stored data, the data structures, the algorithms, the code [and the underlying coding language, which has to be set up right too], as well as the logic, models and assumptions that are fed in.

Thanks for the BIG NAME to put on it. Do you have a book or paper or theorem name to link to The Name?

GEM of TKI

*PS: My coinage, to give a positive, encouraging form: get the quality right and the results will take care of themselves. I suggest the pronunciation “KWEE-KWOH.”

I think many involved in computer programming and interested in biology are in a unique position. They can bring forward new ideas and understanding of the processes inside the cell. The field of bioinformatics will provide us with the most incredible surprises in the near future. Reading on cell biochemical processes and combining them with my field of automation programming, the similarities become obvious.
Logic operations can be performed by electronic and mechanical devices. Why not chemical assemblies?