Thoughts on the “C-Value Enigma”, the “Onion Test” and “Junk DNA”

This morning I was observing some of the recent comment thread activity on Uncommon Descent, and my attention was drawn to this comment by Nick Matzke on the subject of the “onion test” argument for junk DNA:

I have [The Myth of Junk DNA], and all [Jonathan] Wells does is gloss past T. Ryan Gregory’s onion argument; Wells gives the more important point, the huge variability in genome size as a widespread pattern, [NOT] much attention at all. Considering Wells’s book is the definitive ID treatment of the junk DNA issue, and us ID critics have been bashing ID for its complete failure on the genome-size variability issue for years, this was a huge omission on Wells’s part.

Here, I offer a few thoughts on this fascinating subject.

What is the “Onion Test”?

Briefly stated, the “onion test” (which originates with T. Ryan Gregory) observes that onion cells have many times more DNA than we do. And since the onion is considered to be relatively simple as compared to the human, this discrepancy can only be accounted for within the context of the view that much of its DNA is, in fact, junk. This phenomenon is also known as the “C-value enigma”, and describes the lack of correlation (among eukaryotes) between genome size and organismal complexity. The human genome comprises about 3 billion base pairs of DNA; compare this to the genome size of Amoeba dubia (670,000,000,000 bp). Indeed, the human genome contains only marginally more genes than the genomes of C. elegans and D. melanogaster, despite being far larger. In amphibians, the smallest genomes are just shy of 10 billion base pairs, while the largest are nearly 10^11 base pairs. Interestingly, the C-value enigma does not seem to apply to bacteria.
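To make the scale of the discrepancy concrete, here is a short Python sketch using round-number genome sizes. Only the human and Amoeba dubia figures are taken from the text above; the other values are approximations assumed for illustration.

```python
# Approximate haploid genome sizes in base pairs. The human and
# Amoeba dubia figures come from the text above; the rest are
# round-number approximations assumed for illustration.
genome_sizes_bp = {
    "E. coli":         4.6e6,
    "C. elegans":      1.0e8,
    "D. melanogaster": 1.4e8,
    "human":           3.0e9,
    "onion":           1.6e10,   # several times the human genome
    "Amoeba dubia":    6.7e11,
}

# Express each genome as a multiple of the human genome, making the
# lack of correlation with intuitive "complexity" explicit.
for name, size in sorted(genome_sizes_bp.items(), key=lambda kv: kv[1]):
    print(f"{name:16s} {size / genome_sizes_bp['human']:10.4f} x human")
```

Run as written, this lists the onion at roughly five times, and Amoeba dubia at over two hundred times, the human C-value.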

One Critical Assumption

This whole argument for junk DNA seems to rest on the critical assumption that having seemingly excessive amounts of repetitive DNA has no positive bearing on an organism’s physiology. But this assumption has been invalidated by the scientific evidence.

Transcriptional Delays and Timing Mechanisms During Development

One correlation that has been established is that highly expressed genes tend to have short introns (Castillo-Davis et al., 2002), a likely reflection of selective pressure for transcriptional economy in such genes. Other genes are intron-rich: 99% of the 2,400 kb human dystrophin gene, for example, is composed of introns, and the time taken to transcribe this gene into mRNA adds up to about 16 hours (Tennyson et al., 1995). To take another example, consider the Y-chromosomal loci of Drosophila, which are extremely long — spanning millions of bases and consisting largely of introns. During the G2 phase of the primary spermatocyte (and only in that phase of that cell lineage), the Y chromosome unfolds to form species-specific nuclear architectures. A locus such as DhDhc7(Y) is transcribed over the course of two to three days to give rise to a ~5,100,000-nucleotide pre-mRNA (see Reugels et al., 2000; Piergentili et al., 2007; and Redhouse et al., 2011).
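The transcription times above follow directly from gene length divided by polymerase speed. A back-of-the-envelope sketch, assuming an average RNA polymerase II elongation rate of about 2.4 kb per minute (an assumed figure for illustration; measured rates vary roughly between 1 and 4 kb/min, and are not taken from the papers cited):

```python
# Naive transcription-delay estimate: gene length / elongation rate,
# ignoring pausing, splicing and termination. The 2.4 kb/min rate is
# an assumption for illustration only.
ELONGATION_KB_PER_MIN = 2.4

def transcription_time_hours(gene_length_kb, rate_kb_per_min=ELONGATION_KB_PER_MIN):
    return gene_length_kb / rate_kb_per_min / 60.0

dystrophin_hours = transcription_time_hours(2400)  # ~16.7 h, in line with
                                                   # the ~16 h of Tennyson et al.
dhc7y_hours = transcription_time_hours(5100)       # ~35 h at this rate; slower
                                                   # in-vivo elongation would push
                                                   # this toward the observed 2-3 days
```

Even under this simplified model, a multi-megabase locus imposes a delay of a day or more, which is the point of the intron-delay hypothesis discussed below.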

The time taken to transcribe respective stretches of DNA is not inconsequential to physiological fitness. Indeed, Swinburne and Silver (2010) explain,

Transcriptional delays were first invoked in 1970 while discussing biological timing for lambda phage and their use of long, late operons (Watson, 1970). Recognizing correlations between gene size and developmental timing, David Gubb later noted that the Drosophila Antennapedia (Antp) and Ultrabithorax (Ubx) genes owe their extreme lengths to large introns and formally introduced the intron delay hypothesis (Gubb, 1986). With the knowledge that the development of the fly’s body plan is sensitive to the proper expression of these genes in space and time, Gubb proposed that intron length could function as a time delay and aid the orchestration of gene expression patterns.

The paper further observes,

If intron delays have critical roles during developmental programs, then expression networks that depend on intron delays should be sensitive to perturbation of transcription elongation rates. Phenomena supporting this logic emerged in the genetic system of Danio rerio. The foggy and pandora mutants were identified for defects in both heart and neural development with the additional phenotype of shorter tails (Guo et al., 1999; Stainier et al., 1996). The mutants were mapped to the transcription elongation factors Spt5 and Spt6 (Cooper et al., 2005; Guo et al., 2000; Keegan et al., 2002). The nature of these mutants suggests critical roles for transcription elongation rates in the development of particular tissues and cell types. In the pandora (Spt6) background, researchers found that the transcripts of tbx20 (hrT), which encodes a protein required for heart development, are expressed inappropriately late during development and in the incorrect location when compared with wild-type (Griffin et al., 2000). While the molecular mechanism underlying this correlation might entail transcription initiation, elongation, RNA processing, or some combination thereof, the line of evidence suggests that transcriptional kinetics have important roles during vertebrate development.

Read the full paper for a list of further examples of this phenomenon.

Could Varying Genome Sizes Reflect Levels of Alternative Splicing?

Perhaps some of the C-value enigma can be accounted for in terms of alternative splicing and alternative polyadenylation. Alternative splicing allows the exons of a pre-mRNA transcript to be spliced into a number of different isoforms, producing multiple proteins from the same gene. It is known that the level of alternative splicing exhibited in humans (about 90% of multi-exon genes — perhaps more — with an average of 2 or 3 transcripts per gene) is much higher than that for C. elegans (about 22%, with fewer than 2 transcripts per gene). This may, in part, explain why humans have only marginally more genes than C. elegans, which is otherwise seemingly paradoxical given the complexity of humans as compared to the roundworm. Moreover, bacteria do not undergo alternative splicing — which may, in some measure, explain their exemption from the C-value enigma.
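The combinatorial power of alternative splicing is easy to illustrate. In the toy model below, each “cassette” exon can be independently included or skipped, so n optional exons yield up to 2^n isoforms (a deliberate simplification: real splice-site choices are heavily constrained, and the exon names are hypothetical):

```python
from itertools import product

def cassette_isoforms(optional_exons):
    """Enumerate every include/skip pattern over the optional exons."""
    isoforms = []
    for pattern in product((True, False), repeat=len(optional_exons)):
        isoforms.append(tuple(e for e, keep in zip(optional_exons, pattern) if keep))
    return isoforms

# Three hypothetical cassette exons already give 2**3 = 8 possible isoforms.
variants = cassette_isoforms(["exon2", "exon3", "exon4"])
print(len(variants))  # 8
```

The exponential growth is the relevant feature: a modest difference in splicing levels per gene can translate into a large difference in proteome diversity without any change in gene count.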

Varying Preponderances of Transcription Factors

Approximately 10% of human genes code for transcription factors (proteins that bind to specific DNA sequences, namely enhancers or promoters, adjacent to the genes whose expression they regulate). In contrast, only about 5% of yeast genes code for transcription factors. When coupled with a much larger network of transcriptional enhancers and promoters, such a difference could result in a much larger set of gene expression patterns. This could lead to a non-linear increase in organismal complexity (see Levine and Tjian, 2003).
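The non-linearity here is simple combinatorics. If regulatory “states” were read off as, say, pairs of cooperating transcription factors, a several-fold difference in TF count would yield a far larger difference in combinations. A toy calculation of my own (not one from Levine and Tjian), applying the text’s percentages to assumed gene totals of roughly 6,000 for yeast and 20,000 for human:

```python
from math import comb

# Rough TF counts from the percentages in the text, applied to assumed
# gene totals (~6,000 for yeast, ~20,000 for human; both approximations).
yeast_tfs = round(0.05 * 6_000)    # ~300
human_tfs = round(0.10 * 20_000)   # ~2,000

def pairwise_regulatory_states(n_tfs):
    # Distinct unordered TF pairs: n choose 2. A toy proxy for
    # combinatorial regulatory states, not a real biological model.
    return comb(n_tfs, 2)

# ~6.7x more TFs yields ~45x more pairwise combinations:
print(pairwise_regulatory_states(yeast_tfs))   # 44850
print(pairwise_regulatory_states(human_tfs))   # 1999000
```

Higher-order combinations (triples, quadruples) grow faster still, which is one way a modest increase in regulatory genes could support a disproportionate increase in expression patterns.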

Are There Limiting Factors on Genome Size?

In 2002, Andrew George published a paper in Trends in Immunology, entitled, “Is the number of genes we possess limited by the presence of an adaptive immune system?” In the paper, he argued that the number of genes is limited in organisms which possess an adaptive immune system by the burden of self-recognition. As the paper explains,

The factors that are important in limiting the number of functional genes contained within the genome of an organism are presently unknown. Here, it is suggested that in organisms that contain an adaptive immune response, the number of genes in the genome might be limited by the need to delete autoreactive T cells, thus preventing autoimmunity. The more genes an organism has, the more autoantigens are generated, necessitating an increase in the proportion of T cells that are deleted.

Is human complexity limited by the presence of an immune system? Although immunity is vital for health, the need to be tolerant to all ‘self’ molecules could restrict the number of genes in our genome.

A further correlation, which has been established, is that organisms with rapid development typically have lower C-values, presumably because they don’t have time to replicate lots of DNA between cell divisions.

There is a strong positive correlation, however, between the amount of DNA and the volume of a cell and its nucleus — which affects the rate of cell growth and division. Furthermore, in mammals there is a negative correlation between genome size and the rate of metabolism. Bats have very high metabolic rates and relatively small genomes. In birds, there is a negative correlation between C-value and resting metabolic rate. In salamanders, there is also a negative correlation between genome size and the rate of limb regeneration.

Bacteria, which have a single replicon per chromosome, face selective pressure to limit the accumulation of non-genic DNA, which would lengthen replication times and thus slow rates of reproduction. This means that their genome size is correlated with gene number, and thus increases in proportion to structural and metabolic complexity.

There is a clear quantitative correlation between cell volume and DNA content. The trap into which the “junk DNA” advocate has fallen — as he so often does — lies with the (erroneous) assumption that all functions associated with DNA are sequence-dependent. But this need not universally be the case (in fact, it has long been shown not to be). This correlation holds true not only for vertebrate animals, but also for plants and unicellular eukaryotes (protozoa). It has been suggested by many that DNA possesses a structural role in controlling nuclear volume, cell size and cell-cycle length. Cavalier-Smith explains that, with increased cell size, “there is positive selection for a corresponding increase in nuclear volume; it is generally easier to achieve this by increasing the amount of DNA rather than by altering its folding parameters.”

Nuclear volume is probably functionally important for initiation of DNA replication and the transition from G1 to S: replication appears to initiate and terminate at the nuclear periphery and require a critical nuclear volume for onset (Nicolini et al., 1986); G1 nuclear volume growth must depend on concerted expansion of both chromatin and the nuclear envelope. But the significance of nuclear volume for the evolution of genome size does not depend on this, but on its fundamental significance for transcription, RNA processing and export, the rates of which must universally be adjusted to the rate of cytoplasmic protein synthesis. This unavoidable need for an optimal nuclear/cytoplasmic (karyoplasmic) ratio to allow balanced growth of actively growing and dividing eukaryotic cells means that larger cells must evolve proportionally larger nuclei. They can do that only by having larger genomes or unfolding DNA more; the former is mutationally much easier and quantitatively less limited and therefore predominates during evolution. Selection for economy means that smaller cells must have smaller nuclei. Mutations expanding or contracting the genome are always occurring with high frequency and will be selected long before any changing DNA folding patterns radically occur. Those are the fundamental reasons why genome size increases in larger cells and decreases in smaller ones. Bacteria, chloroplasts or mitochondria have no nuclear envelope attached to their DNA and no segregation of RNA and protein synthesis in two fundamentally different compartments; that is why their genome evolution follows different scaling laws: there is no selection for larger genomes in larger bacterial cells.

Summary & Conclusion

In summary, pointing to the C-value paradox — or the so-called “onion test” — as evidence for a preponderance of junk or nonsensical DNA within animal genomes rests on several critical assumptions which are contradicted by recent data. The common naive supposition that a larger genome size is neither here nor there in terms of organismal physiology has been shown to be untenable. With the ever-increasing expansion of our knowledge of the nature and functional inter-relatedness of the genome, those who choose to continue using the “junk DNA” argument as a club with which to beat intelligent design should find these facts disconcerting.

29 Responses to Thoughts on the “C-Value Enigma”, the “Onion Test” and “Junk DNA”

“Wells gives the more important point, the huge variability in genome size as a widespread pattern, NOT much attention at all.”

Typo on my part.

In response to the argument: if most of the genome has merely “sequence independent” function, then Stephen Meyer’s statement that the genome is “chock-full of information” is unsupportable.

Re: C. elegans — (a) C. elegans has about the same number of genes as humans. (b) Your argument is that 3 times the alternative splicing requires 30 times the genome size. Are you also saying that onions have another 3 or 6 times more alternative splicing? And some onions have more than others.

Re: cell-volume. That correlation to genome size is well-known, and I have highlighted it many times. This may be the first time I’ve seen an ID advocate bring it up in a vaguely serious way. The problem with the explanation is that there are likely to be all kinds of ways to regulate cell volume, and genome size seems to be the crudest possible one. An alternative explanation is that larger cells = slower growing = fewer generations = weaker selection against the expense and time of replicating larger genomes = larger genomes. Same correlation, reversed causation.

Nick Matzke, not to step in between you and Jonathan’s discussion, but I brought up a ‘minor’ point in the previous ‘Higgs’ post, which inspired this thread, a point which I have brought to you before, too, and to a few other atheistic neo-Darwinists. A point that you never addressed. In fact the point is usually completely ignored by other neo-Darwinists, or simply rationalized away as inconsequential with a wave of the hand. Yet ‘the point’ in fact falsifies the entire theory of neo-Darwinism by undermining the materialist foundation upon which it is built. Thus, far from being inconsequential to neo-Darwinism, ‘the point’ is in fact of great scientific importance. Here is ‘the point’:

Neo-Darwinian evolution purports to explain all the wondrously amazing complexity of life on earth by reference solely to chance and necessity processes acting on energy and matter (i.e. purely material processes). In fact neo-Darwinian evolution makes the grand materialistic claim that the staggering levels of unmatched complex functional information we find in life, and even the ‘essence of life’ itself, simply ‘emerged’ from purely material processes. And even though this basic scientific point, of the ability of purely material processes to generate even trivial levels of complex functional information, has spectacularly failed to be established, we now have a much greater proof, than this stunning failure for validation, that ‘put the lie’ to the grand claims of neo-Darwinian evolution. This proof comes from the fact that it is now shown from quantum mechanics that ‘information’ is its own unique ‘physical’ entity. A physical entity that is shown to be completely independent of any energy-matter space-time constraints, i.e. it does not ‘emerge’ from a material basis. Moreover this ‘transcendent information’ is shown to be dominant of energy-matter in that this ‘information’ is shown to be the entity that is in fact constraining the energy-matter processes of the cell to be so far out of thermodynamic equilibrium.

notes:

Falsification of neo-Darwinism;

First, Here is the falsification of local realism (reductive materialism).

Here is a clip of a talk in which Alain Aspect talks about the failure of ‘local realism’, or the failure of reductive materialism, to explain reality:

The falsification for local realism (reductive materialism) was recently greatly strengthened:

Physicists close two loopholes while violating local realism – November 2010
Excerpt: The latest test in quantum mechanics provides even stronger support than before for the view that nature violates local realism and is thus in contradiction with a classical worldview.
http://www.physorg.com/news/20.....alism.html

Quantum Measurements: Common Sense Is Not Enough, Physicists Show – July 2009
Excerpt: scientists have now proven comprehensively in an experiment for the first time that the experimentally observed phenomena cannot be described by non-contextual models with hidden variables.
http://www.sciencedaily.com/re.....142824.htm

(of note: hidden variables were postulated to remove the need for ‘spooky’ forces, as Einstein termed them — forces that act instantaneously at great distances, thereby breaking the most cherished rule of relativity theory, that nothing can travel faster than the speed of light.)

Quantum entanglement holds together life’s blueprint – 2010
Excerpt: When the researchers analysed the DNA without its helical structure, they found that the electron clouds were not entangled. But when they incorporated DNA’s helical structure into the model, they saw that the electron clouds of each base pair became entangled with those of its neighbours (arxiv.org/abs/1006.4053v1). “If you didn’t have entanglement, then DNA would have a simple flat structure, and you would never get the twist that seems to be important to the functioning of DNA,” says team member Vlatko Vedral of the University of Oxford.
http://neshealthblog.wordpress.....blueprint/

The relevance of continuous variable entanglement in DNA – July 2010
Excerpt: We consider a chain of harmonic oscillators with dipole-dipole interaction between nearest neighbours resulting in a van der Waals type bonding. The binding energies between entangled and classically correlated states are compared. We apply our model to DNA. By comparing our model with numerical simulations we conclude that entanglement may play a crucial role in explaining the stability of the DNA double helix.
http://arxiv.org/abs/1006.4053v1

Quantum Information confirmed in DNA by direct observation here;

DNA Can Discern Between Two Quantum States, Research Shows – June 2011
Excerpt: — DNA — can discern between quantum states known as spin. – The researchers fabricated self-assembling, single layers of DNA attached to a gold substrate. They then exposed the DNA to mixed groups of electrons with both directions of spin. Indeed, the team’s results surpassed expectations: The biological molecules reacted strongly with the electrons carrying one of those spins, and hardly at all with the others. The longer the molecule, the more efficient it was at choosing electrons with the desired spin, while single strands and damaged bits of DNA did not exhibit this property.
http://www.sciencedaily.com/re.....104014.htm

The necessity of transcendent information, to ‘constrain’ a cell, against thermodynamic effects is noted here:

Information and entropy – top-down or bottom-up development in living systems? A.C. McINTOSH
Excerpt: This paper highlights the distinctive and non-material nature of information and its relationship with matter, energy and natural forces. It is proposed in conclusion that it is the non-material information (transcendent to the matter and energy) that is actually itself constraining the local thermodynamics to be in ordered disequilibrium and with specified raised free energy levels necessary for the molecular and cellular machinery to operate.
http://journals.witpress.com/paperinfo.asp?pid=420

i.e. It is very interesting to note that quantum entanglement, which conclusively demonstrates that ‘information’ in its pure ‘quantum form’ is completely transcendent of any time and space constraints, should be found in molecular biology on such a massive scale, for how can the quantum entanglement ‘effect’ in biology possibly be explained by a material (matter/energy space/time) ’cause’ when the quantum entanglement ‘effect’ falsified material particles as its own ‘causation’ in the first place? (A. Aspect) Appealing to the probability of various configurations of material particles, as neo-Darwinism does, simply will not help since a timeless/spaceless cause must be supplied which is beyond the capacity of the energy/matter particles themselves to supply! To give a coherent explanation for an effect that is shown to be completely independent of any time and space constraints one is forced to appeal to a cause that is itself
not limited to time and space! i.e. Put more simply, you cannot explain an effect by a cause that has been falsified by the very same effect you are seeking to explain! Improbability arguments of various ‘specified’ configurations of material particles, which have been a staple of the arguments against neo-Darwinism, simply do not apply since the cause is not within the material particles in the first place!
,,,To refute this falsification of neo-Darwinism, one must overturn Alain Aspect, and company’s, falsification of local realism (reductive materialism)!

=================

Alain Aspect and Anton Zeilinger by Richard Conn Henry – Physics Professor – Johns Hopkins University
Excerpt: Why do people cling with such ferocity to belief in a mind-independent reality? It is surely because if there is no such reality, then ultimately (as far as we can know) mind alone exists. And if mind is not a product of real matter, but rather is the creator of the “illusion” of material reality (which has, in fact, despite the materialists, been known to be the case, since the discovery of quantum mechanics in 1925), then a theistic view of our existence becomes the only rational alternative to solipsism (solipsism is the philosophical idea that only one’s own mind is sure to exist). (Dr. Henry’s referenced experiment and paper – “An experimental test of non-local realism” by S. Gröblacher et. al., Nature 446, 871, April 2007 – “To be or not to be local” by Alain Aspect, Nature 446, 866, April 2007)

,,,Encoded ‘classical’ information such as what Dembski and Marks demonstrated the conservation of, such as what we find encoded in computer programs, and yes as we find encoded in DNA, is found to be a subset of ‘transcendent’ quantum information by the following method:,,,

This following research provides solid falsification for Rolf Landauer’s decades old contention that the information encoded in a computer is merely physical (merely ‘emergent’ from a material basis) since he believed it always required energy to erase it;

Quantum knowledge cools computers: New understanding of entropy – June 2011
Excerpt: No heat, even a cooling effect;
In the case of perfect classical knowledge of a computer memory (zero entropy), deletion of the data requires in theory no energy at all. The researchers prove that “more than complete knowledge” from quantum entanglement with the memory (negative entropy) leads to deletion of the data being accompanied by removal of heat from the computer and its release as usable energy. This is the physical meaning of negative entropy. Renner emphasizes, however, “This doesn’t mean that we can develop a perpetual motion machine.” The data can only be deleted once, so there is no possibility to continue to generate energy. The process also destroys the entanglement, and it would take an input of energy to reset the system to its starting state. The equations are consistent with what’s known as the second law of thermodynamics: the idea that the entropy of the universe can never decrease. Vedral says “We’re working on the edge of the second law. If you go any further, you will break it.”
http://www.sciencedaily.com/re.....134300.htm

,,,And to dot the i’s, and cross the t’s, here is the empirical confirmation that quantum information is in fact ‘conserved';,,,

Quantum no-hiding theorem experimentally confirmed for first time
Excerpt: In the classical world, information can be copied and deleted at will. In the quantum world, however, the conservation of quantum information means that information cannot be created nor destroyed. This concept stems from two fundamental theorems of quantum mechanics: the no-cloning theorem and the no-deleting theorem. A third and related theorem, called the no-hiding theorem, addresses information loss in the quantum world. According to the no-hiding theorem, if information is missing from one system (which may happen when the system interacts with the environment), then the information is simply residing somewhere else in the Universe; in other words, the missing information cannot be hidden in the correlations between a system and its environment.
http://www.physorg.com/news/20.....tally.html

Further note:

Three subsets of sequence complexity and their relevance to biopolymeric information – Abel, Trevors
Excerpt: Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC (Functional Sequence Complexity). FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization. No empirical evidence exists of either RSC or OSC ever having produced a single instance of sophisticated biological organization. Organization invariably manifests FSC rather than successive random events (RSC) or low-informational self-ordering phenomena (OSC).,,,

Testable hypotheses about FSC

What testable empirical hypotheses can we make about FSC that might allow us to identify when FSC exists? In any of the following null hypotheses [137], demonstrating a single exception would allow falsification. We invite assistance in the falsification of any of the following null hypotheses:

Null hypothesis #4
Computationally successful configurable switches cannot be set by chance, necessity, or any combination of the two, even over large periods of time.

We repeat that a single incident of nontrivial algorithmic programming success achieved without selection for fitness at the decision-node programming level would falsify any of these null hypotheses. This renders each of these hypotheses scientifically testable. We offer the prediction that none of these four hypotheses will be falsified.
http://www.tbiomed.com/content/2/1/29

verse and music: John 1:1-3

In the beginning was the Word, and the Word was with God, and the Word was God. He was with God in the beginning. Through him all things were made; without him nothing was made that has been made.

Actually, C. elegans has about 20,000 genes. We have 30,000. So they’re not quite the same size.

I was not arguing that any one factor is responsible for the lack of correlation. What I am saying is that there is a plethora of factors which make the issue far more complex than it is often portrayed.

I’m not the first ID advocate to bring up cell-volume. Richard Sternberg has also alluded to it in his discussion on the topic. I recommend reading the Cavalier-Smith paper on the subject, as he argues convincingly that a larger cell volume requires more DNA.

There’s a lot of seemingly “extra” DNA in onions. Who’s to say that extra DNA even has anything to do with onions? Maybe it contains the extra data needed to get a giraffe from a tapir or a bat from a rodent.

I have no idea. I’m not suggesting that any of this is the case. But we clearly do not understand what all DNA is for, just as we don’t understand half of what we see in biology, such as that bats resemble rodents but have wings and echolocation.

I’m not saying that we should chase down wild, crazy ideas. But darwinism has a limiting, narrowing effect. It forces us to think inside a very small box. Maybe the extra DNA is junk. Maybe it’s informational, not functional. Perhaps it has some very specific purpose unrelated to onions.

I’m not saying any of this is so. I don’t believe it myself. But darwinism is a science-stopper. It’s the small-minded teacher who tells the next Einstein to quit asking stupid questions and just read the book. Perhaps only someone who ignores it can figure out what the extra onion DNA is for.

“Briefly stated, the “onion test” (which originates with T. Ryan Gregory) observes that onion cells have many times more DNA than we do. And since the onion is considered to be relatively simple as compared to the human, this discrepancy can only be accounted for within the context of the view that much of its DNA is, in fact, junk.”

Boy, I hope that’s not the argument. It would be about as stupid as most of the other “junk DNA” statements over the years. The funny part is that you have an increasing number of researchers (yes, even those who believe evolution through and through) recognizing that the whole idea of “junk DNA” is not helpful and that there are many functions for non-coding DNA. Then at the same time you have these almost militant orthodox Darwinists who keep repeating “junk DNA” “junk DNA” with their fingers stuck in their ears. Kind of sad, actually.

The junk DNA argument is just a variant on the failed and invalid family of “bad design” arguments that are based, not on science in most cases, but on the philosophical/religious preferences of those who espouse them.

[On this, you are correct. I stand corrected. I have amended the article above. JM]

Re: Cavalier-Smith. I’ve read basically all of his work, including his 1985 book, IIRC with the title Evolution of Genome Size. He advocates skeletal DNA there also. I found it an appealing theory for quite a while, until I saw others point out that it seems like the relationship between number of protein products and cell volume could be regulated in a much simpler way by just tuning gene expression up and down. T. Ryan Gregory is the leader in the generation after Cavalier-Smith, and he points out other issues.

But, even if Cavalier-Smith’s idea is true, the genome isn’t “chock-full of information”, it’s “chock-full of spacer”. Even if it’s functional, “sequence-independent” DNA is “functional” in the same way that the junk in a landfill that then becomes an island park or hill park is functional.

I am familiar with Sternberg’s and a few others’ extremely minimal discussions of the issue.

When hundreds of ID fans are going around for years decrying the death of junk DNA and the bias and ignorance of the “Darwinists”, and when *the most important data* in the junk DNA debate — data which deals with most of the DNA (90%+) in large genomes, whereas pseudogenes and regulatory DNA are a tiny fraction compared to the variable, repetitive fraction — gets only a very few brief mentions, that doesn’t cut it. Those mentions either represent a simple failure to understand the relevant scale of the issues, or they represent an attempt to hide the most relevant data from their innocent antievolutionist readership.

That’s not really the argument. The argument also involves the fact that different onions have different amounts of DNA — one onion can have several human genomes’ worth more DNA than another onion.

It’s not exactly an argument FOR junk, either. Gregory himself has taken the opinion that use of the word creates more heat than light because of the strong emotions that are raised by calling something “junk”. BUT — the onion test is a challenge to the large group of people — lots of creationists/IDists, lots of journalists, some scientists, some cranks — who make bold declarations about how most/all DNA is functional and how stupid scientists were to ever think that a lot of it was junk. Your proposed function(s) had better be able to pass the Onion Test, or else you really haven’t got any evidence that most DNA is functional.

There’s a lot of seemingly “extra” DNA in onions. Who’s to say that extra DNA even has anything to do with onions? Maybe it contains the extra data needed to get a giraffe from a tapir or a bat from a rodent.

I have no idea. I’m not suggesting that any of this is the case. But we clearly do not understand what all DNA is for, just as we don’t understand half of what we see in biology, such as why bats resemble rodents yet have wings and echolocation.

I’m not saying that we should chase down wild, crazy ideas. But darwinism has a limiting, narrowing effect. It forces us to think inside a very small box. Maybe the extra DNA is junk. Maybe it’s informational, not functional. Perhaps it has some very specific purpose unrelated to onions.

I’m not saying any of this is so. I don’t believe it myself. But darwinism is a science-stopper. It’s the small-minded teacher who tells the next Einstein to quit asking stupid questions and just read the book. Perhaps only someone who ignores it can figure out what the extra onion DNA is for.

Is that the best you’ve got? You are expressing total mystification at the huge variability in amounts of DNA in different genomes, and yet you have no words of criticism for the vast numbers of IDists/creationists who have declared to the world that it is all/almost all functional, and that scientists were idiots for thinking that a lot of it wasn’t functional? Good luck convincing scientists with that…

Oh, no ScottAndrews. We don’t know what the “extra” onion DNA is, and, therefore, it has no function. And it must be junk. Just like the human junk DNA we’ve told you about for all these years (which unfortunately turned out to have a function, and even though we said it wouldn’t have a function, we were just kidding), but we’ve found unknown DNA in another organism (so there!), and it has just got to be junk, because we don’t know its function. And no creator worth his salt would do that (I’m referring to the creator that doesn’t exist, but if one did exist, then I would know what the creator would be like, because I have a special capacity to know what any non-existent creator would be like, if there were to be such a creator, which there isn’t), so it must not have been created, but must have come about from some bumbling, mistake-prone natural process. Oh, and by the way, that’s what our theory predicted — lots of junk. Except for the times when nature stumbled upon exquisite design. In those cases our theory predicted that nature would produce exquisite design. Either way our theory is true. And don’t bother looking for function in that extra DNA, because we don’t know of a function and, therefore, it must be junk. There is nothing of value to learn here with this extra DNA. Please stop trying to learn anything more about it. Just accept that it is useless junk and that it proves our theory is right. Oh, and by the way, by making a philosophical/religious argument from our ignorance, we are not stopping “science,” because “science” can only deal with things that come about naturally by chemistry and physics, so we are definitely supporting true science.

Only a person as wise and learned as yourself could express contempt for someone mystified by nature, even a simple onion. If only I could share your thoughts for a moment and see the mysteries of the universe unraveled and exposed before my eyes. (Except perhaps for that onion DNA.)

I don’t know why there’s extra DNA. Neither do you. Some people are smart enough to ask. Some are even smarter – they already know everything so they slap a label on it.

Good luck convincing scientists with that…

I won’t speak to scientists in general. But if I wanted to impress you, I would stop reasoning, stop asking questions, and just make crap up. I already know it works.

The following describes how quantum entanglement is related to functional information:

Quantum Entanglement and Information
Excerpt: A pair of quantum systems in an entangled state can be used as a quantum information channel to perform computational and cryptographic tasks that are impossible for classical systems.http://plato.stanford.edu/entries/qt-entangle/

Anton Zeilinger, a leading researcher in Quantum mechanics, relates how quantum entanglement is related to quantum teleportation in this following video;

Quantum Teleportation
Excerpt: To perform the teleportation, Alice and Bob must have a classical communication channel and must also share quantum entanglement — in the protocol we employ*, each possesses one half of a two-particle entangled state.http://www.cco.caltech.edu/~qoptics/teleport.html

And quantum teleportation has now shown that atoms, which are supposed to be the basis from which all functional information ‘emerges’ in life in the atheistic neo-Darwinian framework, are in fact reducible to the transcendent functional quantum information that the atoms were supposed to be the basis of!

Ions have been teleported successfully for the first time by two independent research groups
Excerpt: In fact, copying isn’t quite the right word for it. In order to reproduce the quantum state of one atom in a second atom, the original has to be destroyed. This is unavoidable – it is enforced by the laws of quantum mechanics, which stipulate that you can’t ‘clone’ a quantum state. In principle, however, the ‘copy’ can be indistinguishable from the original (that was destroyed).http://www.rsc.org/chemistrywo.....ammeup.asp

Atom takes a quantum leap – 2009
Excerpt: Ytterbium ions have been ‘teleported’ over a distance of a metre. ...
“What you’re moving is information, not the actual atoms,” says Chris Monroe, from the Joint Quantum Institute at the University of Maryland in College Park and an author of the paper. But as two particles of the same type differ only in their quantum states, the transfer of quantum information is equivalent to moving the first particle to the location of the second.http://www.freerepublic.com/fo.....1769/posts

Thus the burning question, which is usually completely ignored by the neo-Darwinists I’ve asked in the past, is: “How can quantum information/entanglement possibly ‘emerge’ from any material basis of atoms in DNA when entire atoms are now shown, in these teleportation experiments, to reduce to transcendent quantum information in the first place?” i.e. It is simply completely IMPOSSIBLE for the ‘cause’ of transcendent functional quantum information, such as we find on a massive scale in DNA and proteins, to reside within, or ever ‘emerge’ from, any basis of material particles! Despite the virtual wall of silence I’ve seen from neo-Darwinists thus far, this is not a small matter in the least as far as developments in science have gone!

Well, since neo-Darwinists have tried to establish some fairly contentious points of common ancestry, and have even tried to establish ‘theological points’ of ‘God would not have done it that way’, using ‘junk DNA’ as their starting presumption, the validity of whether or not we are actually dealing with junk DNA has become important. And remember, atheistic neo-Darwinists are using this line of ‘junk DNA’ argumentation despite the fact that we now know the coding in DNA is vastly superior to anything man has ever accomplished in his most advanced computer programs, and despite the fact that this is almost exactly the same line of argumentation that was used for decades by atheistic neo-Darwinists for ‘vestigial’ organs:

Demolishing Junk DNA as an icon of evolution – July 2011
Excerpt: “The genome is hierarchical, and it functions at three levels: the DNA molecule itself; the DNA-RNA-protein complex that makes up chromatin; and the three-dimensional arrangement of chromosomes in the nucleus. At all three of these levels, DNA can function in ways that are independent of its exact nucleotide sequence.” (p.93) [. . .] “At the third level, the position of the chromosome inside the nucleus is important for gene regulation. In most cells, the gene-rich portions of chromosomes tend to be concentrated near the center of the nucleus, and a gene can be inactivated by artificially moving it to the periphery. In some cases, however, the pattern is inverted: rod cells in the retinas of nocturnal mammals contain nuclei in which the non-protein-coding parts of chromosomes are concentrated near the center of the nucleus, where they form a liquid crystal that serves to focus dim rays of light.” (p.94-5) (The Myth of Junk DNA)http://www.arn.org/blogs/index.....n_of_evolu

The Myth of Junk DNA Grows With the Telling – July 2011
Since the publication of Jonathan Wells’ The Myth of Junk DNA, many articles have come out documenting more functions for non-protein-coding DNA. It looks like Dr. Wells sampled the water just as the tide was starting to come in, and it’s still rising. Richard Dawkins, Larry Moran, and other proponents of junk DNA should move to higher ground.http://www.evolutionnews.org/2.....48311.html

Among the most blatant failed predictions of materialists is this one. For many years materialists predicted much of human anatomy was vestigial (useless and leftover evolutionary baggage). Yet once again, they were proven completely wrong in this prediction.

“The thyroid gland, pituitary gland, thymus, pineal gland, and coccyx, … once considered useless by evolutionists, are now known to have important functions. The list of 180 “vestigial” structures is practically down to zero. Unfortunately, earlier Darwinists assumed that if they were ignorant of an organ’s function, then it had no function.”
“Tornado in a Junkyard” – book – by former atheist James Perloff

For a prime example of evolution’s failed predictions of vestigial organs: in October 2007 the appendix was found to serve an essential purpose in the human body:

Surgical removal of the tonsils and appendix associated with risk of early heart attack – June 2011
Excerpt: The surgical removal of the appendix and tonsils before the age of 20 was associated with an increased risk of premature heart attack in a large population study performed in Sweden. Tonsillectomy increased the risk by 44% (hazard ratio 1.44) and appendectomy by 33% (HR 1.33). The risk increases were just statistically significant, and were even higher when the tonsils and appendix were both removed.http://medicalxpress.com/news/.....html#share

Further notes:

Human Genome “Infinitely More Complex” Than Expected – April 2010
Excerpt: Hayden acknowledged that the “junk DNA” paradigm has been blown to smithereens. “Just one decade of post-genome biology has exploded that view,” she said. ... Network theory is now a new paradigm that has replaced the one-way linear diagram of gene to RNA to protein. That used to be called the “Central Dogma” of genetics. Now, everything is seen to be dynamic, with promoters and blockers and interactomes, feedback loops, feed-forward processes, and “bafflingly complex signal-transduction pathways.”http://www.creationsafaris.com.....#20100405a

Systems biology: Untangling the protein web – July 2009
Excerpt: Vidal thinks that technological improvements — especially in nanotechnology, to generate more data, and microscopy, to explore interaction inside cells, along with increased computer power — are required to push systems biology forward. “Combine all this and you can start to think that maybe some of the information flow can be captured,” he says. But when it comes to figuring out the best way to explore information flow in cells, Tyers jokes that it is like comparing different degrees of infinity. “The interesting point coming out of all these studies is how complex these systems are — the different feedback loops and how they cross-regulate each other and adapt to perturbations are only just becoming apparent,” he says. “The simple pathway models are a gross oversimplification of what is actually happening.”http://www.nature.com/nature/j.....0415a.html

“Today there is an explosion of knowledge going on in the study of gene regulatory networks. But it is not led, assisted, or even inspired by the theory of evolution. “We have little empirical knowledge on the evolutionary history of such networks.”– Dean, Antony M., Joseph W. Thornton. September 2007. Mechanistic approaches to the study of evolution: the functional synthesis. Nature Reviews Genetics, Vol. 8, pp. 675-688.http://www.newgeology.us/presentation32.html

DNA Caught Rock ‘N Rollin': On Rare Occasions DNA Dances Itself Into a Different Shape – January 2011
Excerpt: Because critical interactions between DNA and proteins are thought to be directed by both the sequence of bases and the flexing of the molecule, these excited states represent a whole new level of information contained in the genetic code,http://www.sciencedaily.com/re.....104244.htm

3-D Structure Of Human Genome: Fractal Globule Architecture Packs Two Meters Of DNA Into Each Cell – Oct. 2009
Excerpt: the information density in the nucleus is trillions of times higher than on a computer chip — while avoiding the knots and tangles that might interfere with the cell’s ability to read its own genome. Moreover, the DNA can easily unfold and refold during gene activation, gene repression, and cell replication.http://www.sciencedaily.com/re.....142957.htm

It is simply ‘criminally blind’ for a supposedly ‘impartial’ scientist to look at this evidence and argue as forcefully for junk DNA as Nick has. The impartial scientist, who was genuinely concerned with finding the truth, would surely be humbled by such staggering levels of poly-functional complexity!!! But alas, we are not dealing with impartial scientists, are we Nick???

It is my position that introns do or did have a function. This does not mean that every single “chunk” of introns is functional. It means that the original introns did have functions; but if whole-genome duplication occurred, then the duplicated introns might become functionless, since there would be no selective pressure to keep them functional. This would, effectively, be the answer to the onion test.

Non-functional DNA isn’t an issue for either position. What is, and has been, an issue ad nauseam is the argument that purported ‘junk DNA’ is reason to discredit ID [Miller, Shermer, Coyne et al.], similar to the ‘bad design’ variant pointed out by Kurt.

Oh really? It wasn’t the evolutionists who made the IDists write books like Wells’s The Myth of Junk DNA or Stephen Meyer’s Signature in the Cell — the latter of which asserts that the genome is “chock full” of information.

“Thus, far from being dispersed sparsely, haphazardly, and inefficiently within a sea of nonfunctional sequences (one that supposedly accumulated by mutation), functional genetic information is densely concentrated on the DNA molecule.” (p. 461)

“Far from containing a preponderance of “junk” – nonprotein-coding regions that supposedly perform no function – the genome is dominated by sequences rich in functional information.” (p. 461)

Furthermore, says Meyer, not only is this established truth, but it is a prediction of ID theory, one made by ID advocates a decade or more ago:

“The genome does display evidence of past viral insertions, deletions, transpositions, and the like, much as digital software copied again and again accumulates errors. Nevertheless, the vast majority of base sequences in the genome, and even the many sequences that do not code for proteins, serve essential biological functions. Genetic signal dwarfs noise, just as design advocates would expect and just as they predicted in the early 1990s.” (p. 461)

However, at numerous places in the book, Meyer notes (correctly) that repetitive sequences have little information:

“Since information and improbability are inversely related, high-probability repeating sequences like ABCABCABCABCABCABC have very little information (either carrying capacity or content). And this makes sense too. Once you have seen the first triad of ABCs, the rest are “redundant”; they convey nothing new. They aren’t informative. Such sequences aren’t complex either. Why? A short algorithm or set of commands could easily generate a long sequence of repeating ABCs, making the sequence compressible.” (p. 107)

Unfortunately for Meyer, he seems not to realize that 40–50% of the human genome (and most animal genomes of similar size) consists of LINEs, SINEs, segmental duplications, and other repeating elements.
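Meyer’s compressibility point is easy to demonstrate empirically: a highly repetitive sequence collapses to almost nothing under a general-purpose compressor, while a random sequence of the same length barely shrinks at all. A minimal sketch, using Python’s standard zlib purely as an illustration of the information-theoretic point (the specific sequences are made up for the demo):

```python
import random
import zlib

# A highly repetitive "sequence": low information content, highly compressible,
# like Meyer's ABCABCABC... example.
repetitive = b"ABC" * 2000   # 6000 bytes

# A pseudo-random byte string of the same length: essentially incompressible.
random.seed(0)               # fixed seed so the demo is reproducible
scrambled = bytes(random.randrange(256) for _ in range(6000))

rep_size = len(zlib.compress(repetitive))
rnd_size = len(zlib.compress(scrambled))

# The repetitive sequence compresses to a few dozen bytes; the random one
# stays near (or slightly above) its original size.
print(rep_size, rnd_size)
```

By this measure, a genome that is 40–50% repetitive elements carries far less sequence information than its raw base-pair count suggests, which is precisely the tension with the “chock-full of information” claim.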

It appears that DNA is but the list of ingredients, stumbled upon and arrogantly and very prematurely counted as grasped or even understood. The recipe is somewhere else, deeper and more complex and wonderful.

Notes as to how detached from reality Matzke is with his junk DNA assertions:

Astonishing DNA complexity demolishes neo-Darwinism
Alex Williams
Excerpt:
Functional junk?
The ENCODE project did confirm that genes still form the primary information needed by the cell—the protein-producing code—even though much greater complexity has now been uncovered. Genes found in the ENCODE project differ only about 2% from the existing catalogue. The astonishing discovery of multiple overlapping transcripts in every part of the DNA was amazing in itself, but the extent of the overlaps is huge compared to the size of a typical gene. On average, the transcripts are 10 to 50 times the size of a typical gene region, overlapping on both sides. And as many as 20% of transcripts range up to more than 100 times the size of a typical gene region. This would be like photocopying a page in a book and having to get information from 10, 50 or even 100 other pages in order to use the information on that one page. The non-protein-coding regions (previously thought to be junk) are now called untranslated regions (UTRs) because while they are transcribed into RNA, they are not translated into protein. Not only has the ENCODE project elevated UTRs out of the ‘junk’ category, but it now appears that they are far more active than the translated regions (the genes), as measured by the number of DNA bases appearing in RNA transcripts. Genic regions are transcribed on average in five different overlapping and interleaved ways, while UTRs are transcribed on average in seven different overlapping and interleaved ways. Since there are about 33 times as many bases in UTRs as in genic regions, that makes the ‘junk’ about 50 times more active than the genes.http://creation.com/images/pdf.....11-117.pdf

further excerpt from preceding paper:

• About 93% of the genome is transcribed (not 3%, as expected). Further study with more wide-ranging methods may raise this figure to 100%. Because much energy and coordination is required for transcription, this means that probably the whole genome is used by the cell and there is no such thing as ‘junk DNA’.
• Exons are not gene-specific but are modules that can be joined to many different RNA transcripts. One exon (i.e. a protein-making portion of one gene) can be used in combination with up to 33 different genes located on as many as 14 different chromosomes. This means that one exon can specify one part shared in common by many different proteins.
• There is no ‘beads on a string’ linear arrangement of genes, but rather an interleaved structure of overlapping segments, with typically five, seven or more transcripts coming from just one segment of code.
• Not just one strand, but both strands (sense and antisense) of the DNA are fully transcribed.
• Transcription proceeds not just one way but both backwards and forwards.
• Transcription factors can be tens or hundreds of thousands of base-pairs away from the gene that they control, and even on different chromosomes.
• There is not just one START site, but many, in each particular gene region.
• There is not just one transcription triggering (switching) system for each region, but many.

The authors concluded:

‘An interleaved genomic organization poses important mechanistic challenges for the cell. One involves the [use of] the same DNA molecules for multiple functions. The overlap of functionally important sequence motifs must be resolved in time and space for this organization to work properly. Another challenge is the need to compartmentalize RNA or mask RNAs that could potentially form long double-stranded regions, to prevent RNA-RNA interactions that could prompt apoptosis [programmed cell death].’
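The “about 50 times more active” figure in the Williams excerpt follows from simple arithmetic on the paper’s own stated numbers (average transcript multiplicities of 5 for genic regions and 7 for UTRs, with roughly 33 times as many bases in UTRs). A quick check, treating those figures purely as the paper’s assumptions rather than independently verified values:

```python
# Figures as stated in the excerpt (assumptions taken from the Williams
# paper, not independently verified here).
genic_multiplicity = 5    # avg. overlapping transcripts per genic region
utr_multiplicity   = 7    # avg. overlapping transcripts per UTR region
utr_to_genic_bases = 33   # UTRs contain ~33x as many bases as genic regions

# Total transcriptional "activity" of UTRs relative to genic regions:
activity_ratio = utr_to_genic_bases * utr_multiplicity / genic_multiplicity

print(round(activity_ratio, 1))   # 46.2 — i.e. the "about 50x" in the excerpt
```

So the excerpt’s “about 50” is a rounded 46.2; the claim stands or falls entirely on the 5/7/33 inputs.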

other notes:

The Reality of Pervasive Transcription – July 2011
Excerpt: Current estimates indicate that only about 1.2% of the mammalian genome codes for amino acids in proteins. However, mounting evidence over the past decade has suggested that the vast majority of the genome is transcribed, well beyond the boundaries of known genes, a phenomenon known as pervasive transcription.http://www.plosbiology.org/art.....io.1000625

How The Junk DNA Hypothesis Has Changed Since 1980 – Richard Sternberg – Oct. 2009 – Excellent Summary
Excerpt: A surprising finding of ENCODE and other transcriptome projects is that almost every nucleotide of human (and mouse) chromosomes is transcribed in a regulated way.http://www.evolutionnews.org/2.....is_ha.html

Shoddy Engineering or Intelligent Design? Case of the Mouse’s Eye – April 2009
Excerpt: — The (entire) nuclear genome is thus transformed into an optical device that is designed to assist in the capturing of photons. This chromatin-based convex (focusing) lens is so well constructed that it still works when lattices of rod cells are made to be disordered. Normal cell nuclei actually scatter light. — So the next time someone tells you that it “strains credulity” to think that more than a few pieces of “junk DNA” could be functional in the cell – remind them of the rod cell nuclei of the humble mouse.http://www.evolutionnews.org/2.....20011.html

As to the underlying assumption of ‘random change’, which is a primary pillar of neo-Darwinian thought, there simply isn’t any ‘randomness’ to speak of in the genome:

Revisiting the Central Dogma in the 21st Century – James A. Shapiro – 2009
Excerpt (Page 12): Underlying the central dogma and conventional views of genome evolution was the idea that the genome is a stable structure that changes rarely and accidentally by chemical fluctuations (106) or replication errors. This view has had to change with the realization that maintenance of genome stability is an active cellular function and the discovery of numerous dedicated biochemical systems for restructuring DNA molecules (107–110). Genetic change is almost always the result of cellular action on the genome. These natural processes are analogous to human genetic engineering. ... (Page 14) Genome change arises as a consequence of natural genetic engineering, not from accidents. Replication errors and DNA damage are subject to cell surveillance and correction.http://shapiro.bsd.uchicago.ed.....0Dogma.pdf

I’m predicting, though I don’t know, that the giant disparity in genome size, uncorrelated with the complexity of the organisms, was not predicted, wasn’t searched for, and that its first discoverers were quite surprised. If so, it can’t be trumpeted as a confirmation. Someone correct me if I’m wrong.

However well you think ‘junk DNA theory’ comports with the observation of the huge variability in amounts of DNA in different genomes, such variability is the question at hand. Can you agree that the strength of declaring some part of a system non-functional is at least related to exactly how well you know the system’s workings? Thorough understanding has yet to be shown.

DNA from the ‘assumed junk’ column has moved to the functional column. Why would one not think that will continue?

Should actual junk be found, the ID position (that it resulted from degradation, contamination, etc., due to [real] evolutionary and biological-physical-chemical processes acting on a junk-free designed genome) would be a far stronger hypothesis, certainly just as consistent with observation. Real evolution, as in observations that IDists/creationists and everyone else all believe in: natural selection, mutation rates, allele frequency change, and so on.

Most of that transcription on the sequence level appears to be low-level transcriptional noise (on the bulk level, what gets transcribed is mostly genic and regulatory, with the rest of the genome making up only a small percentage of the transcripts). The relevant enzymes just aren’t that precise, and will transcribe any DNA at some low-but-detectable level. In other words, junk RNA.

Well, Nick, contrary to your atheistic gut reaction to label everything that you don’t understand in life as ‘junk’, there are ‘cooler heads’ who are not so predisposed to twist science for atheistic propaganda purposes as you clearly are, people who do ACTUAL research, who would wholeheartedly disagree with your ‘junk RNA’ assessment!

Nature Reports Discovery of “Second Genetic Code” But Misses Intelligent Design Implications – May 2010
Excerpt: Rebutting those who claim that much of our genome is useless, the article reports that “95% of the human genome is alternatively spliced, and that changes in this process accompany many diseases.” ... The complexity of this “splicing code” is mind-boggling. ... A summary of this article, also titled “Breaking the Second Genetic Code” in the print edition of Nature, summarized this research thusly: “At face value, it all sounds simple: DNA makes RNA, which then makes protein. But the reality is much more complex.” So what we’re finding in biology are:

# “beautiful” genetic codes that use a biochemical language;
# Deeper layers of codes within codes showing an “expanding realm of complexity”;
# Information processing systems that are far more complex than previously thought (and we already knew they were complex), including “the appearance of features deeper into introns than previously appreciated”http://www.evolutionnews.org/2.....of_se.html

‘Linc-ing’ a noncoding RNA to a central cellular pathway – August 2010
Excerpt: This current work demonstrates that several dozen lincRNAs are targeted directly by p53,http://www.physorg.com/news199625236.html

Most Detailed Annotation of Fruit-Fly Genome Points Way to Understanding All Organisms’ Genomes – December 2010
Excerpt: “We also found an order-of-magnitude increase in the ways that genes are spliced and edited to produce alternate forms of known proteins, thus significantly increasing the complexity of the proteome.” ... Despite the scrutiny to which the Drosophila genome has been subjected, the researchers found new or altered exons or splice forms in almost three-quarters of Drosophila’s previously annotated genes.http://www.sciencedaily.com/re.....131131.htm

Moreover, there is a very solid scientific reason for presupposing functionality for the entire genome:

Arriving At Intelligence Through The Corridors Of Reason (Part II) – April 2010
Excerpt: ... since junk DNA would put an unnecessary energetic burden on cells during the process of replication, it stands to reason that it would more likely be eliminated through selective pressures.http://www.uncommondescent.com.....n-part-ii/

Thus, Nick, it certainly seems that the only junk around here that we can be absolutely 100% certain is junk is, in fact, your very own junk science, in that you are forcing your own philosophical bias onto the evidence prior to investigation, and without any regard for the trends in science that preceded your forced declaration of junk!!!

Experimental Evolution of Gene Duplicates in a Bacterial Plasmid Model
Excerpt: In a striking contradiction to our model, no such conditions were found. The fitness cost of carrying both plasmids increased dramatically as antibiotic levels were raised, and either the wild-type plasmid was lost or the cells did not grow. This study highlights the importance of the cost of duplicate genes and the quantitative nature of the tradeoff in the evolution of gene duplication through functional divergence. http://www.springerlink.com/co.....4014664w8/

This recent paper also found the gene duplication scenario to be highly implausible:

The Extinction Dynamics of Bacterial Pseudogenes – Kuo and Ochman – August 2010
Excerpt: “Because all bacterial groups, as well as those Archaea examined, display a mutational pattern that is biased towards deletions and their haploid genomes would be more susceptible to dominant-negative effects that pseudogenes might impart, it is likely that the process of adaptive removal of pseudogenes is pervasive among prokaryotes.”http://www.evolutionnews.org/2.....37581.html

“Reductive Evolution Can Prevent Populations from Taking Simple Adaptive Paths to High Fitness” – Dr. Ann Gauger
Excerpt: Dr. Gauger experimentally tested two-step adaptive paths that should have been within easy reach for bacterial populations. Listen in and learn what Dr. Gauger was surprised to find as she discusses the implications of these experiments for Darwinian evolution.http://intelligentdesign.podom.....4_13-07_00

Geez, if genes code for proteins, and there are more than 100,000 proteins, that should tell people (especially scientists) that there are more than 30,000 genes.

And thanks to alternative gene splicing, there are. Ya see, alternative gene splicing can take what appears to be one gene and make several genes out of it; that is, several different proteins can come from one gene by rearranging the exons of that one particular gene.

But to do that takes knowledge: knowledge of editing and splicing (what to edit, what to splice, the order of the splicing), and only intelligence has the capability to pull that off.
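The combinatorial power of alternative splicing described above can be sketched with a toy model: if a gene’s exons must stay in order but any non-empty subset of them may be included, a gene with n exons can in principle yield 2^n − 1 distinct transcripts. A minimal illustration with hypothetical exon names (real splicing is regulated and does not realize every combination):

```python
from itertools import combinations

# Toy gene with four exons. Real splice isoforms keep exons in genomic
# order, which combinations() preserves.
exons = ["E1", "E2", "E3", "E4"]

# Every non-empty ordered subset of the exons is a candidate isoform.
isoforms = [
    "-".join(subset)
    for r in range(1, len(exons) + 1)
    for subset in combinations(exons, r)
]

print(len(isoforms))   # 15 candidate isoforms (2**4 - 1) from just 4 exons
print(isoforms[:3])    # ['E1', 'E2', 'E3']
```

With a typical human gene of around 8–10 exons, the same arithmetic gives hundreds of candidate transcripts per gene, which is why a genome of ~20,000–30,000 genes can encode well over 100,000 proteins.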

As for Nick’s “antievolution” tripe: well, Nick, just how are YOU defining “evolution”? Or are you going to run away from that question AGAIN?