Sanford's "genomic (mutational) meltdown" scenarios are a hoot. Even DaveScot was bright enough to see that Sanford's proposed mutation rates were out of line with reality: fast-reproducing sexual species that have existed for a few million years should have all been extinct by now, but they're not. Sanford inflates deleterious mutation rates and disregards compensatory mechanisms.

His argument is a little more involved than that. It seems to revolve around genome size: the smaller genome of something like P. falciparum prevents genetic meltdown, but meltdown would occur in mammals with their larger genomes. So genetic entropy is a problem for the latter (if not on Sanford's YEC timescales). You can't just take fast-reproducing things like P. falciparum and apply the failure of Genetic Entropy in this case widely. At least that's how I read it.

Quote

It occurred to me recently that Sanford’s projected rate of genetic decay doesn’t square with the observed performance of P. falciparum. P. falciparum’s genome is about 23 million nucleotides. At Sanford’s lowest given rate of nucleotide copy errors that means each individual P. falciparum should have, on average, about 3 nucleotide errors compared to its immediate parent. If those are nearly neutral but slightly deleterious mutations (as the vast majority of eukaryote mutations appear to be) then the number should be quite sufficient to cause a genetic meltdown from their accumulation over the course of billions of trillions of replications. Near-neutral mutations are invisible to natural selection but the accumulation of same will eventually become selectable. If all individuals accumulate errors the result is decreasing fitness and natural selection will eventually kill every last individual (extinction). Yet P. falciparum clearly didn’t melt down but rather demonstrated an amazing ability to keep its genome perfectly intact. How?

After thinking about it for a while I believe I found the answer - the widely given rate of eukaryote replication errors is correct. If P. falciparum individuals get an average DNA copy error rate of one in one billion nucleotides then it follows that approximately 97% of all replications result in a perfect copy of the parent genome. That’s accurate enough to keep a genome that size intact. An environmental catastrophe such as an ice age which lowers temperatures even at the equator below the minimum of ~60F in which P. falciparum can survive would cause it to become extinct while genetic meltdown will not. Mammals however, with an average genome size 100 times that of P. falciparum, would have an average of 3 replication errors in each individual. Thus mammalian genomes would indeed be subject to genetic decay over a large number of generations, which handily explains why the average length of time between emergence and extinction for mammals and other multicelled organisms with similar genome sizes is about 10 million years, if the fossil and geological evidence paints an accurate picture of the past. I DO believe the fossil and geological records present us with an incontrovertible picture of progressive phenotype evolution that occurred over a period of billions of years. I don’t disbelieve common ancestry and phenotype evolution by descent with modification - I question the assertion that random mutation is the ultimate source of modification which drove phylogenetic diversification.
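For what it's worth, the arithmetic in the quote is easy to check. Assuming copy errors strike independently per site (a Poisson model, which the quote implicitly uses), the expected number of new errors per replication is μG and the fraction of error-free copies is e^(-μG):

```python
import math

# Expected new errors per replication, assuming independent per-site
# errors at rate mu across a genome of G sites (Poisson model).
def expected_errors(mu, genome_size):
    return mu * genome_size

# Fraction of replications with zero errors: the Poisson zero class.
def perfect_copy_fraction(mu, genome_size):
    return math.exp(-mu * genome_size)

# P. falciparum: ~23 Mb genome, error rate ~1e-9 per site per replication
mu, G = 1e-9, 23e6
print(expected_errors(mu, G))        # ~0.023 errors per replication
print(perfect_copy_fraction(mu, G))  # ~0.977, i.e. the quoted ~97% perfect copies

# A mammal-sized genome, ~100x larger, at the same per-site rate
print(expected_errors(mu, 100 * G))  # ~2.3 errors per copy (the quote rounds to 3)
```

The numbers match the quote's claims at the stated 1-in-a-billion rate; the quote's "3 errors" for mammals is a rounding of ~2.3.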

OK, why are there still Amoeba dubia around? I haven't found an explicit statement of average generation time for the species, but it is likely on the order of 24 hours based on generation times for other amoebae. Its genome is about 670 billion base pairs. That would seem to qualify as a large genome, wouldn't it?

By the way, all this genetic entropy (why the stupid name, why not just Muller's Ratchet?) stuff relates to the work of Laurence Loewe at Edinburgh. He's done a lot of research on Muller's Ratchet, well worth checking out:

I'm not a population geneticist or indeed any kind of evolutionary biologist whatsoever. But it's my impression that Sanford is saying nothing new; he's just trying to repackage issues that pop gen people have known about for decades. Indeed, occasional creationist basher Joe Felsenstein published one of the classic papers in this respect:

Some time ago on PandasThumb, Felsenstein said he'd probably better read the Sanford book as creationists would be using it. S Cordova offered to send it to him. It'd be great to get his thoughts. I think this is the discussion:

Well, Wes mentioned one example of a "large"-genomed, rapidly-reproducing species, and there are a lot more available. Mammal genomes average between 2 and 3 gigabases (Gb), but lots of insect and plant genomes can be larger: around 16 Gb in wheat or in the grasshopper Podisma pedestris -- five times larger than the human genome.

Nailing Sanford down on questions about interesting populations like California condors would be fun -- they're the only North American remnant of Gymnogyps, have been around since the early Pleistocene, and their population dropped to 22 individuals not very long ago... and their estimated genome size is 1.5 Gb. They should have accumulated enough deleterious mutations that such a small, closely related group would produce nothin' but dead young, right? Or how about Przewalski's horse?

Sanford is a YEC of sorts, so he skewed his parameters to fit his skewed view of the Earth's entire biome being less than 100 K years old, as I recall (I may be wrong about the exact figure).

-------------------------------------------

ETA: I was curious about known recessives in the existing condors and there is one identified (chondrodystrophy) that results in fatal abnormalities:

Looks right to me, given your parameters. You're getting 10 mutations/individual for 100 individuals, or 1000 mutations per generation. Of those, 1/100,000 are beneficial, so you're only getting one beneficial mutation every 100 generations. Those are the tiny blips. Once in a while one or two of them drift up to an appreciable frequency, and the mean number of beneficial alleles per individual climbs above 1.0.

None of them fix, though, which is not surprising, since they're almost all effectively neutral. That means you should have one fixing by chance every 20,000 generations, plus some probability from the tail at higher selection coefficients.
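The arithmetic above can be checked in a few lines, using the standard neutral-drift result that a new mutation fixes with probability 1/(2N) in a diploid population of N individuals (parameters as stated in the post):

```python
# Back-of-envelope check of the numbers above.
N = 100                      # population size (individuals)
mut_per_ind = 10             # new mutations per individual per generation
p_beneficial = 1 / 100_000   # fraction of mutations that are beneficial

muts_per_gen = N * mut_per_ind                     # 1000 mutations/generation
beneficials_per_gen = muts_per_gen * p_beneficial  # 0.01 -> one per 100 generations
gens_per_beneficial = 1 / beneficials_per_gen

# An effectively neutral new mutation fixes with probability 1/(2N)
p_fix = 1 / (2 * N)                                # 1/200
gens_per_fixation = gens_per_beneficial / p_fix    # one fixation per 20,000 generations

print(muts_per_gen, gens_per_beneficial, gens_per_fixation)
```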

Over at TWeb, where this started, I asked the same question: why haven't all the fast-reproducing mammal species died out from genetic meltdown yet? The topic of mice was raised because, while mice have a genome roughly the size of humans' (approx. 3 Gb), they have a generation time some 170x shorter (6 weeks vs. 20 years). So why haven't all the mice gone extinct by now?

I made the statement: "All other things being equal, the population that breeds faster will accumulate mutations faster."

Jorge Fernandez (a YEC who was acting as a go-between to Sanford) supposedly forwarded my questions to Sanford and got this reply:

Sanford: " No, it is just the opposite, short generation times means more frequent and better selective filtering."

Which makes zero sense and is trivially easy to refute with their own program:

Run Mendel with two populations that are identical in every way (i.e. genome size, mutation rate, selection pressure, etc.) except make one generation time 2x the other, say two per year vs. one per year.

If you run them both for 1000 generations, both will end up at the same (lower) fitness level, but the two-per-year population will take only 500 years to get there.

If you run them both for 1000 years, the once-per-year population will end up at the exact same fitness as in the first trial, but the two-per-year population will have 2000 generations and end up with an even lower fitness level, if it doesn't just go extinct first.

These guys are busted, and they know they're busted. Now it's just a question of how far they can push this shit and how much money they can make before the errors become well known.

--------------"Science is what got us to the humble place we're at, and what hard-won progress we might realize comes from science, with ID completely flaccid, religious apologetics bitching from the sidelines." - Eigenstate at UD

Evolution is a quest for innovation. Organisms adapt to changing natural selection by evolving new phenotypes. Can we read this dynamics in their genomes? Not every mutation under positive selection responds to a change in selection: beneficial changes also occur at evolutionary equilibrium, repairing previous deleterious changes and restoring existing functions. Adaptation, by contrast, is viewed here as a non-equilibrium phenomenon: the genomic response to time-dependent selection. Our approach extends the static concept of fitness landscapes to dynamic fitness seascapes. It shows that adaptation requires a surplus of beneficial substitutions over deleterious ones. Here, we focus on the evolution of yeast and Drosophila genomes, providing examples where adaptive evolution can and cannot be inferred, despite the presence of positive selection.

there's a section on Muller's Ratchet:

Quote

Here, we argue for a sharpened concept of adaptive evolution at the molecular level. Adaptation requires positive selection, but not every mutation under positive selection is adaptive. Selection and adaptation always refer to a molecular phenotype depending on a single genomic locus or on multiple loci, such as the energy of a transcription-factor-binding site in our first example. This correlates the direction of selection at all loci contributing to the phenotype and calls for the distinction between adaptation and compensation. The infinite-sites approximation, which is contained in many population-genetic models, neglects such correlations and is therefore not optimally suited to infer adaptation [16] and [23]. Here, we address this problem by a joint dynamical approach to selection and genomic response in a genome with finite number of sites. In this approach, adaptive evolution is characterized by a positive fitness flux φ, which measures the surplus of beneficial over deleterious substitutions.

It is instructive to contrast this view of adaptive evolution with Muller's ratchet, a classical model of evolution by deleterious substitutions [53] and [54]. This model postulates a well-adapted initial state of the genome so that all, or the vast majority of, mutations have negative fitness effects. Continuous fixations of slightly deleterious changes then lead to a stationary decline in fitness (i.e. to negative values of φ). Similarly to the infinite-sites approximation, this model neglects compensatory mutations. In a picture of a finite number of sites, it becomes clear that every deleterious substitution leads to the opportunity for at least one compensatory beneficial mutation (or more, if the locus contributes to a quantitative trait), so that the rate of beneficial substitutions increases with decreasing fitness. Therefore, assuming selection is time-independent, decline of fitness (φ < 0) is only a transient state and the genome will eventually reach detailed balance between deleterious and beneficial substitutions, that is, evolutionary equilibrium (φ = 0). As long as selection is time-independent, an equilibrium state exists for freely recombining loci and in a strongly linked (i.e. weakly recombining) genome, although its form is altered in the latter case by interference selection [55] and [56]. Conversely, an initially poorly adapted system will have a transient state of adaptive evolution (φ > 0) before reaching equilibrium. Time-dependent selection, however, continuously opens new windows of positive selection, the genome is always less adapted than at equilibrium and the adaptive state becomes stationary. Thus, we reach a conclusion contrary to Muller's ratchet.
Because selection in biological systems is generically time-dependent, decline of fitness is less likely even as a transient state than suggested by Muller's ratchet: the model offers no explanation of how a well-adapted initial state without opportunities of beneficial mutations is reached in the first place.

As a minimal model for adaptive evolution, we have introduced the Fisher-Wright process in a macro-evolutionary fitness seascape, which is defined by stochastic changes of selection coefficients at individual genomic positions on time scales larger than the fixation time of polymorphisms (and is thus different from micro-evolutionary selection fluctuations and genetic draft). Time-dependence of selection is required to maintain fitness flux: the seascape model is the simplest model that has a non-equilibrium stationary state with positive φ. The two parameters of the minimal model (strength and rate of selection changes) are clearly just summary variables for a much more complex reality. The vastly larger genomic datasets within and across species will enable us to infer the dynamics of selection beyond this minimal model.
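The finite-sites argument in the quoted passage can be illustrated with a deliberately crude toy model: flip each "good" site to "bad" with a small per-generation probability (a deleterious substitution) and each "bad" site back to "good" with a larger one (a compensatory substitution, which selection favors). The rates below are arbitrary illustrative values; the point is only that fitness settles at a detailed-balance equilibrium instead of ratcheting downward forever:

```python
import random

random.seed(1)
L = 1000                      # number of sites in the genome
p_del, p_comp = 0.002, 0.01   # per-site substitution probabilities (illustrative)

sites = [True] * L            # start well adapted: every site "good"
for gen in range(5000):
    for i in range(L):
        if sites[i] and random.random() < p_del:
            sites[i] = False          # deleterious substitution
        elif not sites[i] and random.random() < p_comp:
            sites[i] = True           # compensatory substitution

bad_fraction = sites.count(False) / L
# Detailed balance predicts a stationary bad fraction of
# p_del / (p_del + p_comp) = 1/6 ~ 0.17: fitness stops declining.
print(bad_fraction)
```

The supply of beneficial (compensatory) changes grows as more sites go bad, which is exactly why the decline is self-limiting in a finite-sites picture.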

Jorge Fernandez at TWeb is in contact with Sanford. He just posted the following from Sanford:

Quote

Hi Jorge - I have been traveling ... The comment ... about "cooking the books" is, of course, a false accusation. The issue has to do with memory limits. Before a Mendel run starts it allocates the memory needed for different tasks. With deleterious mutations this is straight-forward - the upper range of mutation count is known. With beneficials it is harder to guess final mutation count - some beneficials can be vastly amplified. Where there is a high rate of beneficials they can quickly exhaust RAM and the run crashes. Wesley Brewer [one of the creators of Mendel] has tried to avoid this by placing certain limits - but fixing this is a secondary priority and will not happen right away. With more RAM we can do bigger experiments. It is just a RAM issue.

Best - John

This is in response to - "Wes Elsberry made a comment that I think could be a good title, 'Mendel's Accountant cooks the books.'" I assume that they're talking about the failure of the program to increase fitness when a high number of beneficial mutations are specified.

I guess Sanford et al. would argue that this problem isn't a big issue, since there's never a case in which there are loads (e.g. 90%) of beneficial mutations. Deleterious or slightly deleterious mutations are in the majority in reality, there's no RAM problem with those, and so the main conclusion they draw from Mendel is unaffected by the problems shown with beneficial mutations. At least I guess that's what he'd say.

Sanford also says:

Quote

The fact that our runs crash when we run out of RAM is not by design. If someone can help us solve this problem we would be very grateful. We typically need to track hundreds of millions of mutations. Beneficials create a problem for us because they amplify in number. We are doing the best we can.

I would urge your colleagues [Heaven help me - John is under the impression that you people are my colleagues ... brrrrrrrr!] to use more care. In science we should be slow to raise claims of fraud without first talking to the scientist in question to get their perspective. Otherwise one might unwittingly be engaging in character assassination.

I guess Sanford et al would argue that this problem isn't a big issue, since there's never a case in which there are loads (e.g. 90%) of beneficial mutations.

No, the problem is quantitative and not qualitative. If the program doesn't handle the 90% case correctly, it isn't handling the 0.001% case correctly, either. And we know that v1.2.1 did not handle it correctly. If you are going around claiming to have produced an "accurate" simulation, you are on the hook for that.

The 90% case just makes the error blatantly obvious.

Speaking of hypocrisy, how careful is Sanford in not making sweeping generalizations about biologists having gotten things wrong?

As demonstrated in the two runs I did comparing the output of v1.2.1 and v1.4.1 on the very same configuration, v1.2.1 has a major error in its handling of beneficial mutations. This has nothing at all to do with memory limits; I also ran both with the default case, and the experimental case used in both merely changed the two parameters as specified by Zachriel above. The memory usage was under 130MB for all cases I ran; the memory I had was sufficient and the simulations ran to completion. Sanford either was given a garbled account of the issue or is deploying a meaningless digression as a response.

Ok, thanks Wesley. I know nothing about programming, so a lot of what I have to say on related subjects will be utter nonsense!

I totally concur about Sanford's sweeping generalisations. He claims that Mendel's Accountant has "falsified" Neo-Darwinian evolution:

Quote

When any reasonable set of biological parameters are used, Mendel provides overwhelming empirical evidence that all of the “fatal flaws” inherent in evolutionary genetic theory are real. This leaves evolutionary genetic theory effectively falsified—with a degree of certainty which should satisfy any reasonable and open-minded person.

and

Quote

As a consequence, evolutionary genetic theory now has no theoretical support—it is an indefensible scientific model. Rigorous analysis of evolutionary genetic theory consistently indicates that the entire enterprise is actually bankrupt. In this light, if science is to actually be self-correcting, geneticists must “come clean” and acknowledge the historical error, and must now embrace honest genetic accounting procedures.

I have zero respect for anyone who produces such rhetoric without actually submitting their claims for review by the scientific community - the very people they are lambasting. That is fundamentally dishonest.

Sam at TWeb has emailed Sanford to see if he will engage directly at that messageboard. Could be interesting.

Oh and an additional response from Sanford. This is an explanation as to why such low population sizes (1000) were used and how this doesn't affect their conclusions. In addition it's a response to the question of why mice (as an example of a pretty fast reproducing species) have not yet gone extinct.

Quote

Hi Jorge - Please tell these folks that I appreciate their interest in Mendel, and if they see certain ways we can make it more realistic, we will try and accommodate them.

Mendel is fundamentally a research tool, and so offers a high degree of user-specification. There is no inherently "realistic" population size - it just depends on what circumstance you wish to study. The default setting for population size is set at 1000 because it is convenient - whether you are using the Windows version on your laptop, or any other computer, you are less likely to run out of memory. We are proceeding to study population size and also population sub-structure. I believe larger populations should realistically be set up as multiple tribes with a given migration rate between tribes. Under these conditions we see little improvement with larger population sizes. But they are welcome to do bigger runs if they have the memory resources.

The mouse question is interesting. I think one would need to change various parameters for mouse - each species is different. I would like to know the maximal (not minimal) generation time - do they know? This would define the maximal time to extinction. I have read that the per-generation mutation rate is about an order of magnitude lower in mouse - which makes sense if there are fewer cell divisions in the generative cells between generations. I would be happy to do such experiments when I get the input data.

That's interesting, because the 2008 ICR "Proceedings of the Sixth International Conference on Creationism" (pp. 87–98) has a "paper" by John Baumgardner, John Sanford, Wesley Brewer, Paul Gibson and Wally ReMine.

Mendel represents an advance in forward-time simulations by incorporating several improvements over previous simulation tools... Mendel is tuned for speed, efficiency and memory usage to handle large populations and high mutation rates... We recognized that to track millions of individual mutations in a sizable population over many generations, efficient use of memory would be a critical issue – even with the large amount of memory commonly available on current generation computers. We therefore selected an approach that uses a single 32-bit (four-byte) integer to encode a mutation’s fitness effect, its location in the genome, and whether it is dominant or recessive. Using this approach, given 1.6 gigabytes of memory on a single microprocessor, we can accommodate at any one time some 400 million mutations... This implies that, at least in terms of memory, we can treat reasonably large cases using a single processor of the type found in many desktop computers today.

I await the actual achievement of these claims with un-bated breath. All emphases are mine.
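The 32-bit encoding described in the quoted paper is a standard bit-packing trick. A sketch follows; the field widths (1 bit for dominance, 6 bits for a fitness-effect index, 25 bits for genome position) are illustrative guesses, since the excerpt doesn't give the actual layout:

```python
# Pack one mutation into a single 32-bit integer, as the quoted paper
# describes. Field widths here are hypothetical: 25 bits of genome
# position, 6 bits of fitness-effect index, 1 bit for dominance.
POS_BITS, FIT_BITS = 25, 6

def pack(position, fitness_index, dominant):
    assert position < (1 << POS_BITS) and fitness_index < (1 << FIT_BITS)
    return (int(dominant) << (POS_BITS + FIT_BITS)) \
        | (fitness_index << POS_BITS) | position

def unpack(m):
    position = m & ((1 << POS_BITS) - 1)
    fitness_index = (m >> POS_BITS) & ((1 << FIT_BITS) - 1)
    dominant = bool(m >> (POS_BITS + FIT_BITS))
    return position, fitness_index, dominant

m = pack(1_234_567, 42, True)
assert m < 2**32                              # fits in four bytes
assert unpack(m) == (1_234_567, 42, True)     # round-trips losslessly
```

At four bytes per mutation, 400 million such integers occupy 1.6 GB, which matches the paper's arithmetic; whether the program actually behaves correctly is, of course, the separate question raised in this thread.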

Mutations are not beneficial, neutral, or detrimental on their own, nor is their contribution to fitness fixed for all time. Mutations contribute to fitness in a context, and as the context changes, so may the value of its contribution to fitness. Fitness is a value that applies to the phenotype in ensemble. Mendel's Accountant appears instead to assume that mutations have a fixed value that cannot be changed by context. Thus, Mendel's Accountant appears to completely ignore research on compensatory mutations.

Because the value of a mutation depends on context, a particular mutation may be beneficial, neutral, or detrimental at initial appearance, but later become part of a different class as other mutations come into play. Mendel's Accountant treats mutations as falling into a fixed class.

These faults alone suffice to disqualify Mendel's Accountant from any claim to providing an accurate simulation of biological evolution.
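The context dependence described above is the phenomenon of epistasis, sign epistasis in particular: the same mutation can be deleterious on one genetic background and beneficial on another, so no fixed per-mutation fitness value exists. A minimal illustration with made-up fitness values:

```python
# Two loci, alleles a/A and b/B. Fitness values are arbitrary
# illustrative numbers chosen to show sign epistasis.
fitness = {
    ("a", "b"): 1.00,   # wild type
    ("a", "B"): 0.95,   # B alone is deleterious
    ("A", "b"): 0.90,   # A alone is deleterious
    ("A", "B"): 1.05,   # together they are beneficial
}

def effect_of_B(allele_at_A):
    """Fitness change caused by the b -> B mutation on a given background."""
    return fitness[(allele_at_A, "B")] - fitness[(allele_at_A, "b")]

print(effect_of_B("a"))   # negative: deleterious on the wild-type background
print(effect_of_B("A"))   # positive: beneficial once A is present
```

A simulator that assigns each mutation a fixed fitness class at birth cannot represent this, which is the substance of the compensatory-mutation objection.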

Of course, I tend to think that a good approach to critique of a program to do a particular task is to actually produce a program that does that task better. I think that is something that we could give some thought to here. Much of the same background work applies to analysis of MA or design of an alternative.

Some ideas:

- Develop a test suite based on published popgen findings in parallel with development

- Base it on the most general, abstract principles for broad applicability

- Aim for number of generations to be limited only by amount of disk or other long-term storage available

- Consider means for handling large population sizes

- Start with a simple system, either as run-up to version 1 or with sufficient generality to be extensible to more complex systems

It seems to me that producing a thoroughly-vetted and tested platform that covers fewer cases is far better than producing a large, unwieldy, and bug-ridden product whose output cannot be trusted.

I'm not a population geneticist or indeed any kind of evolutionary biologist whatsoever. But it's my impression that Sanford is saying nothing new; he's just trying to repackage issues that pop gen people have known about for decades.

What's new is his claim that meltdown affects sexual populations. I should check the evolution-of-sex literature; I'm sure they (Sally Otto and Nick Barton, amongst others) showed that it doesn't happen. In his book Sanford ignores the recent evolution-of-sex literature.

Wes -

Quote

OK, why are there still Amoeba dubia around?

Indeed - hasn't it turned into Amoeba dubya?

Anyway, remember that Sanford is a YEC, so millions of years aren't relevant for him.

--------------It is fun to dip into the various threads to watch cluelessness at work in the hands of the confident exponent. - Soapy Sam (so say we all)

Wes, how would your proposed project improve on other programs? For example, of the goals that you list, does existing software such as AVIDA or other models not already satisfy your criticisms?

Next, I see that there are two goals. The first is to refute lame-ass creatocrap like "Mendel's Accountant provides overwhelming empirical evidence that all of the 'fatal flaws' inherent in evolutionary genetic theory are real. This leaves evolutionary genetic theory effectively falsified--with a degree of certainty that should satisfy any reasonable and open-minded person."

The second would be to actually advance the scientific work of evo simulations.

I might be able to assist the first, and I am happy to leave the second to the rest of you.

Your list of ideas does add to the refutation of the creatocrap, as its items are features of what a good simulator should be able to do.

--------------"Science is the horse that pulls the cart of philosophy."