This is the fourth paper critical of the ENCODE hype. The first was Sean Eddy's paper in Current Biology (Eddy, 2012), the second a paper by Niu and Jiang (2012), and the third a paper by Graur et al. (2013). In my experience this is unusual, because the critiques are all directed at how the ENCODE Consortium interpreted their data and how they misled the scientific community (and the general public) by exaggerating their results. Those kinds of criticisms are common in journal clubs and, certainly, in the blogosphere, but scientific journals generally don't publish them. It's okay to refute the data (as in the arsenic affair) but ideas usually get a free pass no matter how stupid they are.

In this case, the ENCODE Consortium did such a bad job of describing their data that journals had to pay attention. (It helps that much of the criticism is directed at Nature and Science because the other journals want to take down the leaders!)

Ford Doolittle makes some of the same points made in the other papers. For example, he points out that the ENCODE definition of "function" is not helpful. However, Ford also does a good job of explaining why the arguments in favor of junk DNA are still valid. He says ...

My aim here is to remind readers of the structure of some earlier arguments in defense of the junk concept (10) that remain compelling, despite the obvious success of ENCODE in mapping the subtle and complex human genomic landscape.

The emphasis is on the "C-Value Paradox," by which he means the tons of work on variation in genome size. The conclusion from all that effort—dating back to the 1960s—was that large genomes contain a huge amount of non-functional DNA, or junk DNA. This DNA is still junk despite the fact that it may bind transcription factors and contain regions of modified chromatin. These sites are called "functional elements" (FEs) in the ENCODE papers even though they probably don't have a "function" in any meaningful sense of the word.

Ford proposes a thought experiment based on our understanding of genome sizes. He notes that lungfish have a huge genome (130,000 Mb) while pufferfish (Takifugu) have a genome (400 Mb) much smaller than ours. (Our genome is 3,200 Mb.) He also notes that there are many closely related species of amphibians, plants, and protists that differ significantly in genome size.

Here's the thought experiment ...

Suppose that there had been (and probably, some day, there will be) ENCODE projects aimed at enumerating, by transcriptional and chromatin mapping, factor footprinting, and so forth, all of the FEs in the genomes of Takifugu and a lungfish, some small and large genomed amphibians (including several species of Plethodon), plants, and various protists. There are, I think, two possible general outcomes of this thought experiment, neither of which would give us clear license to abandon junk.

The first outcome is that all genomes, regardless of size, will have approximately the same number of FEs as the human genome. Since all these species have about the same number of genes, this means that each gene requires a constant number of FEs and it's just lucky that the human genome is almost entirely "functional." There would still have to be lots of junk in species with larger genomes. This result would make it difficult to explain why pufferfish survive with only one eighth as much DNA as humans.

The second outcome of the thought experiment would be that FEs correlate with C-value regardless of complexity. In other words, very similar species will have very different numbers of FEs in spite of the fact that they have the same numbers of genes and the same level of complexity. This is the expected result if FEs are mostly spurious binding sites that have no function but it would be difficult to explain if FEs are really doing something important in regulating gene expression.

The Doolittle thought experiment is similar to Ryan Gregory's Onion Test [see Genome Size, Complexity, and the C-Value Paradox]. In both cases, those who proclaim the death of junk DNA are challenged to explain how their hypothesis is consistent with what we know about variation in genome size. I think it's pretty obvious that the ENCODE leaders haven't thought about the evidence for junk DNA.

There are several other interesting points in Ford Doolittle's paper—most of which I agree with. I want to quote some of them because he says it so much better than I. Here's his critique of the definition of function used by the scientists in the ENCODE Consortium.

A third, and the least reliable, method to infer function is mere existence. The presence of a structure or the occurrence of a process or detectable interaction, especially if complex, is taken as adequate evidence for its being under selection, even when ablation is infeasible and the possibly selectable effect of presence remains unknown. Because our genomes have introns, Alu elements, and endogenous retroviruses, these things must be doing us some good. Because a region is transcribed, its transcript must have some fitness benefit, however remote. Because residue N of protein P is leucine in species A and isoleucine in species B, there must be some selection-based explanation. This approach enshrines "panadaptationism," which was forcefully and effectively debunked by Gould and Lewontin (34) in 1979 but still informs much of molecular and evolutionary genetics, including genomics. As Lynch (39) argues in his essay "The Frailty of Adaptive Hypotheses for the Origins of Adaptive Complexity,"

This narrow view of evolution has become untenable in light of recent observations from genomic sequencing and population genetic theory. Numerous aspects of genomic architecture, gene structure, and developmental pathways are difficult to explain without invoking the nonadaptive forces of genetic drift and mutation. In addition, emergent biological features such as complexity, modularity, and evolvability, all of which are current targets of considerable speculation, may be nothing more than indirect by-products of processes operating at lower levels of organization.

Functional attribution under ENCODE is of this third sort (mere existence) in the main.

Ford is correct. The ENCODE leaders don't seem to be particularly knowledgeable about modern evolutionary theory.

Ford Doolittle makes another point that also came up during my critique of Jonathan Wells' book The Myth of Junk DNA. Wells knows that SOME transposons have secondarily acquired a function so he assumes that ALL transposons must have a function. There are many scientists who fall into the same trap—they find a function for one or two particular features of the genome and then leap to the conclusion that all such features are functional.

Doolittle uses the example of lncRNAs (long non-coding RNAs). A very small number of them have been shown to have a function but that does not mean that all of them do. Ford's point is that, given the way science operates, it is guaranteed that someone will find a function for at least one lncRNA because biology is messy.

Regulation, defined in this loose way, is, for instance, the assumed function of many or most lncRNAs, at least for some authors (6,7,44,45). However, the transcriptional machinery will inevitably make errors: accuracy is expensive, and the selective cost of discriminating against all false promoters will be too great to bear. There will be lncRNAs with promoters that have arisen through drift and exist only as noise (46). Similarly, binding to proteins and other RNAs is something that RNAs do. It is inevitable that some such interactions, initially fortuitous, will come to be genuinely regulatory, either through positive selection or the neutral process described below as constructive neutral evolution (CNE). However, there is no evolutionary force requiring that all or even most do. At another (sociology of science) level, it is inevitable that molecular biologists will search for and discover some of those possibly quite few instances in which function has evolved and argue that the function of lncRNAs as a class of elements has, at last, been discovered. The positivist, verificationist bias of contemporary science and the politics of its funding ensure this outcome.

What he didn't say—probably because he's too kind—is that these same pressures (pressure to publish and pressure to get funding) probably lead to incorrect claims of function.

It should be obvious to all of you that Ford Doolittle expects that the outcome of the thought experiment will be that "functional elements" (i.e. binding sites etc.) will correlate with genome size. This means that FEs aren't really functional at all—they are part of the junk. Does it make sense that our genome is full of nonfunctional bits of DNA? Yes, it does, especially if the spurious binding sites are neutral. It even makes sense if they are slightly deleterious because Doolittle understands Michael Lynch's arguments for the evolution of nonadaptive complexity.

Assuming these predictions are borne out, what might we make of it? Lynch (39) suggests that much of the genomic- and systems-level complexity of eukaryotes vis à vis prokaryotes is maladaptive, reflecting the inability of selection to block fixation of incrementally but mildly deleterious mutations in the smaller populations of the former.

It's clear that the ENCODE leaders don't think like this. So, what motivates them to propose that our genome is full of regulatory sites and molecules when there seems to be a more obvious explanation of their data? Doolittle has a suggestion ...

[A fourth misconception] may be a seldom-articulated or questioned notion that cellular complexity is adaptive, the product of positive selection at the organismal level. Our disappointment that humans do not have many more genes than fruit flies or nematodes has been assuaged by evidence that regulatory mechanisms that mediate those genes’ phenotypic expressions are more various, subtle, and sophisticated (57), evidence of the sort that ENCODE seems to vastly augment. Yet there are nonselective mechanisms, such as [constructive neutral evolution], that could result in the scaling of FEs as ENCODE defines them to C-value nonadaptively or might be seen as selective at some level higher or lower than the level of individual organisms. Splits within the discipline between panadaptationists/neutralists and those researchers accepting or doubting the importance of multilevel selection fuel this controversy and others in biology.

I agree. Part of the problem is adaptationism and the fact that many biochemists and molecular biologists don't understand modern concepts in evolution. And part of the problem is The Deflated Ego Problem.

It's a mistake to think that this debate is simply about how you define function. That seems to be the excuse that the ENCODE leaders are making in light of these attacks. Here's how Ford Doolittle explains it ...

In the end, of course, there is no experimentally ascertainable truth of these definitional matters other than the truth that many of the most heated arguments in biology are not about facts at all but rather about the words that we use to describe what we think the facts might be. However, that the debate is in the end about the meaning of words does not mean that there are not crucial differences in our understanding of the evolutionary process hidden beneath the rhetoric.

This reminds me of something that Stephen Jay Gould said in Darwinism and the Expansion of Evolutionary Theory (Gould, 1982).

The world is not inhabited exclusively by fools and when a subject arouses intense interest and debate, as this one has, something other than semantics is usually at stake.

20 comments:

Lynch is widely quoted these days, but I find myself sceptical of the primacy of population size as a significant driver of the C-value differential between eukaryotes and prokaryotes. What does 'population size' even mean in a mixed collection of clonal organisms? The individual bacterium has severe local constraints, and they arise from its means of making a living, not least of which are bounding within a size-limiting energy-generating outer membrane and being surrounded by cousins.

Endosymbionts, relaxation of nutritional limits, cytoskeletons, multiple origins and sex are, IMO, much more significant than Ne with respect to the adaptive landscape traversed by the respective organisms with regard to junk.

I agree. Population size is probably not a good explanation for the prokaryote/eukaryote differences in C-value. I think it's mostly energetics; eukaryotes have a substantial surplus of energy, due to mitochondria, that prokaryotes lack, and that enables the former to "ignore" to a certain extent the energy cost of replicating "junk". Energetic surplus is very probably the reason why eukaryotic cells achieved the level of complexity and multicellular cell specialization they have.

I do think, however, that population size is certainly important in fixation of neutral or quasi-neutral mutations, regardless of the organisms being clonal or not. But that probably has nothing to do with C-values anyway.

I don't find it convincing. All one would need to do to eliminate the junk in a genome is to make the population bigger?

It is true that scaling up a population's number will render elimination of deleterious alleles more likely, but there are a couple of unproven assumptions in the most naive model. Do we know, for instance, what the selective coefficient of N bits of junk 'typically' is? It would have to be in a particular band to ensure that it could not be eliminated by a population of N individuals, but could be by a population a couple of orders of magnitude higher. Do we know that it is? There is also the issue of scaling. If you argue with N as the only variable, you have to do something to ensure that the new N is stirred with the same efficiency as the old, and typically it isn't (and in prokaryotes, the absence of mate search means that a significant vector is completely absent).

The explanation of junk DNA has nothing to do with adaptive landscapes. That's the whole point of nonadaptive evolution of complexity.

There is nothing to say an adaptive landscape cannot be flat! As far as 'surplus' DNA is concerned, prokaryotes are on peaks. For eukaryotes, the landscape is much flatter - then population size may start to exert an effect. But there are many more potent mechanistic biases than that.

Yes, I'd agree energetics is significant, but also basic 'nutrition' - one has to get the building blocks for all this surplus, and consumption is far more productive than absorption. As far as multicellulars go, I think their main cost is that multicellular body, and unless nutrition is severely limiting in general, the cost of being-multicellular-with-junk imposes a negligible increase.

All one would need to do to eliminate the junk in a genome is to make the population bigger?

That doesn't necessarily follow but it does explain why most single-cell eukaryotes have genomes that are much smaller than those of multicellular species. Yeast (a fungus), for example, has a genome that's the same size as those in some bacteria.

If you argue with N as the only variable, you have to do something to ensure that the new N is stirred with the same efficiency as the old, and typically it isn't (and in prokaryotes, the absence of mate search means that a significant vector is completely absent).

I suspect that you haven't read Lynch's book. He discusses all sorts of other variables that can affect genome size. The main ones are mutation rates (especially the difference between deletions and insertions), generation times, recombination, and body size. He discusses whether these are correlated with population size and, if so, how. He also spends a lot of time on effective population sizes and evolution in species with many subpopulations.

Amoebas have one special feature that has to be taken into account when Ne is discussed, and it is asexuality.

Of course, nobody has sequenced those genomes and nobody will until super long-read sequencing becomes available but I would bet they're full of out-of-control transposons, probably in combination with some serious polyploidy.

And that would be consistent with the theory. Which, BTW, does not say that Ne is the only thing that matters - all sorts of details about the biology of the species, the mutation rates and patterns, etc. do matter and it is not an absolute relationship as a result, only a general pattern: small Ne => big genomes, full of transposons, large Ne => small genomes, few transposons, fewer introns, etc.

Correct, I should perhaps read more before pontificating. Nonetheless, the difficulty I see is in establishing that it is Ne itself that is the factor at work, rather than something of which change in Ne is an inevitable corollary.

Multicellularity, for example. The principal cost of extra DNA is in building the soma. Such somas inevitably reduce the number of germ line cells that a given niche can support. But such somas generally pay their way in germ cell survival, with a bit to spare. Unless nutrition is severely and consistently limiting, junk-generating mutations (up to a point) may simply not be deleterious, for any Ne. For a population to become large, nutritional limitation - a significant determinant of 'detriment' - is less likely to be operational.

I realise that nutrition is not the sole 'cost' of junk, and that other factors, such as meiotic misalignment, may come into play. Which may help explain the higher rate of transposons in asexual eukaryotes.

I am not sure if I understand you correctly, but if I do, you are in effect arguing for the "Ne determines the strength of selection" position while claiming to oppose it.

Yes, the mere existence of junk DNA has, in terms of nutrition, a very slight negative effect. That means the absolute value of the selection coefficient is very low. Accordingly, |s| << 1/(4Ne), so the mutation is effectively neutral and selection cannot get rid of it. That's the point.
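The nearly neutral condition can be made concrete with Kimura's diffusion approximation for the fixation probability of a new mutation. Here is a minimal sketch in Python; the population sizes and selection coefficient are illustrative numbers, not figures taken from Lynch or Doolittle:

```python
import math

def fixation_prob(N, s):
    """Kimura's fixation probability for a new mutation with selection
    coefficient s in a diploid population of size N (initial frequency
    1/(2N)). As s -> 0 this approaches the neutral value 1/(2N)."""
    if s == 0:
        return 1.0 / (2 * N)
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

s = -1e-5  # a very slightly deleterious increment of junk DNA

# Small population: 4*N*|s| = 0.04 << 1, so drift dominates and the
# junk fixes at nearly the neutral rate 1/(2N) = 5e-4.
small = fixation_prob(1_000, s)      # ~4.9e-4

# Large population: 4*N*|s| = 40 >> 1, so selection "sees" the junk
# and it essentially never fixes.
large = fixation_prob(1_000_000, s)  # ~8.5e-23

print(small, large)
```

The same arithmetic gives the threshold discussed below: for a given s, selection becomes ineffective roughly when Ne drops below 1/(4|s|).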

Yes, I think you may be misunderstanding me. I am recognising the argument, but questioning its force.

For any imaginary small value of s, there is a threshold value of Ne below which selection is ineffective. The question would be: how robust is the assumption that s for junk increments is generally in the range that enables this relationship to have causal power? It may indeed be a factor (allowing certain assumptions about the efficiency with which real populations are stirred and 'sampled') but a dominant factor, I'm less convinced.

The null hypothesis would be that s=0, and Ne doesn't matter. Adaptationists are criticised for assuming s is large, but here we have a similar assumption - s is nonzero and within a particular range.

Obviously, the situation here is rather complex, because the detriment of a given increment of junk is highly contingent. s depends on how much more or less efficient than the rest of the population it makes its bearers, and this will vary depending on how big it is and what other increments are around in the population at the time.

I accept the approximate correlation, and the possibility that Ne may be one factor, but there are significant mechanistic factors relating to the different types of organism that themselves affect both the dynamics of junk, and Ne. It may be these, rather than probabilistic effects, that provide the causation.

So if I now understand you correctly, are you arguing that the selection coefficient of new TE insertions is always 0, so Ne does not play a role? That's not true. It is clearly not 0 because if it were, there would not be such elaborate defense mechanisms trying to prevent new insertions, and TEs would not be so rare in organisms with very large population sizes.

So if I now understand you correctly, are you arguing that the selection coefficient of new TE insertions is always 0, so Ne does not play a role?

Nope - that was my 'null hypothesis' :) I'm certainly not saying "s always = 0". I was arguing particularly on the nutritional cost assumption, and was looking for some justification of the assumption that that cost is sufficient for natural Ne differentials, and their effect on selective response, to provide 'the' explanation for patterns of junk.

I granted that other mechanisms act against junk. In particular, a virulent transposon clearly does damage to genes, causes meiotic misalignments, and ups the genetic load at a rate potentially far in excess of that which can be absorbed by the below-threshold differentials of less active insertions. Then, of course, real and potentially large selection coefficients leap into action. No argument here.

I'm interested in "TEs would not be so rare in organisms with very large population sizes.", however. What organisms are we talking about here? I am generally arguing for comparing like with like, so I'd hope this was not across a major divide such as pro/eukaryote or uni/multicellular.

Uni/multicellular is not a large divide - there is nothing fundamentally different about unicellular eukaryotes, it's just our historic multicellular bias that creates the division. Multicellularity has developed multiple times in multiple lineages, independently.

Most unicellular eukaryotes have large Ne, small genomes, with few TEs, and few introns. Similarly, smaller multicellular eukaryotes have more compressed genomes than mammals (but bigger than those of protists) and fewer TEs - flies are a perfect example and in their case TEs have the added bonus of being younger and more active, i.e. the old ones have been purged already.

As I said, it's not a perfect relationship, it's not expected to be, but it exists.

My point about the unicellular/multicellular divide relates to a couple of mechanistic factors that may have a bearing - one is that the 'nutrition' cost is principally borne by the extra DNA in somatic cells, not by a combined somatic/germ cell, another is that generation times tend to be extended by the existence of that phase, increasing the time available for replication.

But another is that the germ line DNA is encapsulated. There is less constraint to gather the materials for life in close contact with an unforgiving medium - the unit cost of germline DNA goes down; the soma pays for itself and then some.

Prokaryotes are most severely constrained, as they are tiny, energetically limited, in direct molecular competition with relatives, must separate sister chromosomes by cell wall growth, etc.

Single-celled eukaryotes are about 10,000 times bigger, with cytokinetic manipulation, multiple origins of replication, food engulfment, storage and/or a large energetic surface. This reduces the restraint on genome expansion.

Multicellular eukaryotes can sit in their somatic cloak, indulging a life of leisure, with cellular specialisation the payoff for elaboration, and further freeloading DNA the cost. If you are indulging the cost of a soma, you can better afford a bit of surplus DNA. Unless you fly, of course.

In tandem with this series goes an inevitable reduction in Ne, so naturally the correlation holds.

I dunno - just being pedantic, perhaps - but I likes a mechanistic explanation meself!

Many biologists have called the 80% figure more a publicity stunt than a statement of scientific fact. Nevertheless, ENCODE leaders say, the data resources that they have provided have been immensely popular. So far, papers that use the data have outnumbered those that take aim at the definition of function.

Currently, it is 400 authors and 30+ papers vs. four authors and four papers, and $200,000,000.00 vs. $0.00.

The debate sounds like a matter of definitional differences. But to dismiss it as semantics minimizes the importance of words and definitions, and of how they are used to engage in research and to communicate findings. ENCODE continues to collect data and to characterize what the 3.2 billion base pairs might be doing in our genome and whether that activity is important. If a better word than ‘function’ is needed to describe those activities, so be it. Suggestions on a postcard please.

There is exactly one English word for most of these activities. It is real and you can measure it: noise.

Has anyone on either side of this debate removed the useless 80% of the human genome and recorded what the results were?

"Ford is correct. The ENCODE leaders don't seem to be particularly knowledgeable about modern evolutionary theory."

Why would they need to have a preconceived idea of the evolution theory to examine the function of something? Are they trying to confirm the paradigm, or find out what the genome does? This seems to me to be the problem with interpreting what is being looked at. Kind of like saying "I wouldn't have seen it if I didn't believe it."
