Microbial genetic diversity is often investigated by comparing relatively similar 16S molecules through multiple alignments between reference sequences and novel environmental samples, using phylogenetic trees, direct BLAST matches, or phylotype counts. However, are we missing novel lineages in the microbial dark universe by relying on standard phylogenetic and BLAST methods? If so, how can we probe that universe using alternative approaches? We performed a novel type of multi-marker analysis of genetic diversity exploiting the topology of inclusive sequence similarity networks.

Results

Our protocol identified 86 ancient gene families, well distributed and rarely transferred across the 3 domains of life, and retrieved their environmental homologs among 10 million predicted ORFs from human gut samples and other metagenomic projects. Numerous highly divergent environmental homologs were observed in gut samples, although the most divergent genes were over-represented in non-gut environments. In our networks, most divergent environmental genes grouped exclusively with uncultured relatives, in maximal cliques. Sequences within these groups were under strong purifying selection and presented a range of genetic variation comparable to that of a prokaryotic domain.
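The network step described above can be sketched in plain Python: treat sequences as nodes, connect pairs whose similarity passes some threshold, and enumerate maximal cliques with the textbook Bron-Kerbosch algorithm. The sequence names and the toy edge list below are illustrative assumptions, not the study's actual pipeline.

```python
def bron_kerbosch(R, P, X, adj, out):
    """Classic Bron-Kerbosch enumeration of maximal cliques.
    R: growing clique; P: candidate vertices; X: already-processed vertices."""
    if not P and not X:
        out.append(frozenset(R))
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, out)
        P.remove(v)
        X.add(v)

def maximal_cliques(edges, nodes):
    """Build an undirected similarity graph and list its maximal cliques."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    out = []
    bron_kerbosch(set(), set(nodes), set(), adj, out)
    return out

# Hypothetical toy graph: edges connect sequences whose pairwise similarity
# passed a threshold; "env" = environmental, "cult" = cultured.
nodes = ["env1", "env2", "cult1", "cult2"]
edges = [("env1", "env2"), ("cult1", "cult2"), ("env1", "cult1")]
cliques = maximal_cliques(edges, nodes)  # three maximal cliques in this toy graph
```

In this toy network the pair of environmental sequences forms its own maximal clique, mimicking the groups of exclusively uncultured relatives reported above.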

Conclusions

Many gene families included environmental homologs that were highly divergent from cultured homologs: in 79 gene families (including 18 ribosomal proteins), Bacteria and Archaea were less divergent from each other than some groups of environmental sequences were from any cultured or viral homologs. Moreover, some groups of environmental homologs, when not too divergent to be aligned, branched very deeply in phylogenetic trees of life. These results underline how limited our understanding of the most diverse elements of the microbial world remains, and they encourage a deeper exploration of natural communities and their genetic resources, hinting that major, still-unknown divisions of life may await discovery.

Reviewers

This article was reviewed by Eugene Koonin, William Martin and James McInerney.

•Cell divisions are unequal and cell growth is heterogeneous in the meristem

•Simulations indicate that growth and cell cycle are coordinated in individual cells

•Meristem cell sizes are rapidly corrected after perturbation

•Abnormal cell sizes do not affect growth but perturb organ boundaries and emergence

Summary

How cells regulate their dimensions is a long-standing question [ 1, 2 ]. In fission and budding yeast, cell-cycle progression depends on cell size, although it is still unclear how size is assessed [ 3–5 ]. In animals, it has been suggested that cell size is modulated primarily by the balance of external signals controlling growth and the cell cycle [ 1 ], although there is evidence of cell-autonomous control in cell cultures [ 6–9 ]. Regardless of whether regulation is external or cell autonomous, the role of cell-size control in the development of multicellular organisms remains unclear. Plants are a convenient system to study this question: the shoot meristem, which continuously provides new cells to form new organs, maintains a population of actively dividing and characteristically small cells for extended periods [ 10 ]. Here, we used live imaging and quantitative, 4D image analysis to measure the sources of cell-size variability in the meristem and then used these measurements in computer simulations to show that the uniform cell sizes seen in the meristem likely require coordinated control of cell growth and cell cycle in individual cells. A genetically induced transient increase in cell size was quickly corrected by more frequent cell division, showing that the cell cycle was adjusted to maintain cell-size homeostasis. Genetically altered cell sizes had little effect on tissue growth but perturbed the establishment of organ boundaries and the emergence of organ primordia. We conclude that meristem cells actively control their sizes to achieve the resolution required to pattern small-scale structures.
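The kind of coordination the simulations test can be caricatured in a few lines of Python: compare a "sizer" rule (divide on reaching a size threshold) with a "timer" rule (divide after a fixed interval), and measure the spread of birth sizes. All parameters (growth rate, noise, threshold, cycle length) are arbitrary illustrative values, not measurements from the study.

```python
import random

def simulate(rule, steps=400, n=50, seed=1):
    """Follow n independent cell lineages; each step, grow every cell by a
    noisy relative increment, divide according to `rule`, and return the
    coefficient of variation (CV) of birth sizes after a burn-in."""
    rng = random.Random(seed)
    sizes = [1.0] * n
    ages = [0] * n
    births = []
    for t in range(steps):
        for i in range(n):
            sizes[i] *= 1.0 + rng.gauss(0.05, 0.02)  # noisy exponential growth
            ages[i] += 1
            divide = sizes[i] >= 2.0 if rule == "sizer" else ages[i] >= 14
            if divide:
                sizes[i] /= 2.0          # symmetric division
                ages[i] = 0
                if t >= steps // 2:      # record birth sizes after burn-in
                    births.append(sizes[i])
    mean = sum(births) / len(births)
    sd = (sum((b - mean) ** 2 for b in births) / len(births)) ** 0.5
    return sd / mean

cv_sizer = simulate("sizer")  # size-threshold rule keeps birth sizes tight
cv_timer = simulate("timer")  # fixed-interval rule lets size variation drift
```

The size-sensing rule confines birth sizes to a narrow band, while the purely temporal rule lets multiplicative growth noise accumulate, in the spirit of the argument that uniform meristem cell sizes require cell-cycle decisions coupled to size.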

The neutral theory of molecular evolution predicts that the amount of neutral polymorphisms within a species will increase proportionally with the census population size (Nc). However, this prediction has not been borne out in practice: while the range of Nc spans many orders of magnitude, levels of genetic diversity within species fall in a comparatively narrow range. Although theoretical arguments have invoked the increased efficacy of natural selection in larger populations to explain this discrepancy, few direct empirical tests of this hypothesis have been conducted. In this work, we provide a direct test of this hypothesis using population genomic data from a wide range of taxonomically diverse species. To do this, we relied on the fact that the impact of natural selection on linked neutral diversity depends on the local recombinational environment. In regions of relatively low recombination, selected variants affect more neutral sites through linkage, and the resulting correlation between recombination and polymorphism allows a quantitative assessment of the magnitude of the impact of selection on linked neutral diversity. By comparing whole genome polymorphism data and genetic maps using a coalescent modeling framework, we estimate the degree to which natural selection reduces linked neutral diversity for 40 species of obligately sexual eukaryotes. We then show that the magnitude of the impact of natural selection is positively correlated with Nc, based on body size and species range as proxies for census population size. These results demonstrate that natural selection removes more variation at linked neutral sites in species with large Nc than in those with small Nc and provide direct empirical evidence that natural selection constrains levels of neutral genetic diversity across many species. This implies that natural selection may provide an explanation for this longstanding paradox of population genetics.
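The core of the correlation analysis can be illustrated with a toy example: generate genomic windows in which linked neutral diversity is reduced where recombination is low, then measure the recombination-diversity correlation. The saturating functional form and all numbers below are invented for illustration; they are not the paper's coalescent model or data.

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient, stdlib only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rng = random.Random(42)
pi0 = 0.01                                         # hypothetical neutral diversity level
rec = [rng.uniform(0.1, 5.0) for _ in range(200)]  # toy per-window recombination rates
# Illustrative linked-selection effect: diversity is reduced more where
# recombination is low (arbitrary saturating form), plus sampling noise.
pi = [pi0 * r / (r + 1.0) * (1.0 + rng.gauss(0.0, 0.05)) for r in rec]
r_obs = pearson(rec, pi)  # positive: more diversity where recombination is high
```

A strongly positive correlation of this kind is the signal whose magnitude, under a real coalescent model, quantifies how much selection removes linked neutral variation.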

Author Summary

A fundamental goal of population genetics is to understand why levels of genetic diversity vary among species and populations. Under the assumptions of the neutral model of molecular evolution, the amount of variation present in a population should be directly proportional to the size of the population. However, this prediction does not tally with real-life observations: levels of genetic diversity are found to be substantially more uniform, even among species with widely differing population sizes, than expected. Because natural selection—which removes genetically linked neutral variation—is more efficient in larger populations, selection on novel mutations offers a potential reconciliation of this paradox. In this work, we align and jointly analyze whole genome genetic variation data from a wide variety of species. Using this dataset and population genetic models of the impact of selection on neutral variation, we test the prediction that selection will disproportionally remove neutral variation in species with large population sizes. We show that the genomic signature of natural selection is pervasive across most species, and that the amount of linked neutral variation removed by selection correlates with proxies for population size. We propose that pervasive natural selection constrains neutral diversity and provides an explanation for why neutral diversity does not scale as expected with population size.

Funding: This work was supported in part by National Institutes of Health grants R01GM084236, AI099105 and AI106734 to DLH. During this work, RBCD was supported by a Harvard Prize Graduate Fellowship and a UCB Chancellor’s Postdoctoral Fellowship. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

•Molecular clock analysis indicates an ancient origin of animals in the Cryogenian

•Diversification into animal phyla occurred in the Ediacaran, before the Cambrian

•Uncertainties in the fossil record and the molecular clock affect time estimates

•A precise timeline of animal evolution cannot be obtained with current methods

Summary

The timing of divergences among metazoan lineages is integral to understanding the processes of animal evolution, placing the biological events of species divergences into the correct geological timeframe. Recent fossil discoveries and molecular clock dating studies have suggested a divergence of bilaterian phyla >100 million years before the Cambrian, when the first definite crown-bilaterian fossils occur. Most previous molecular clock dating studies, however, have suffered from limited data and biases in methodologies, and virtually all have failed to acknowledge the large uncertainties associated with the fossil record of early animals, leading to inconsistent estimates among studies. Here we use an unprecedented amount of molecular data, combined with four fossil calibration strategies (reflecting disparate and controversial interpretations of the metazoan fossil record) to obtain Bayesian estimates of metazoan divergence times. Our results indicate that the uncertain nature of ancient fossils and violations of the molecular clock impose a limit on the precision that can be achieved in estimates of ancient molecular timescales. For example, although we can assert that crown Metazoa originated during the Cryogenian (with most crown-bilaterian phyla diversifying during the Ediacaran), it is not possible with current data to pinpoint the divergence events with sufficient accuracy to test for correlations between geological and biological events in the history of animals. Although a Cryogenian origin of crown Metazoa agrees with current geological interpretations, the divergence dates of the bilaterians remain controversial. Thus, attempts to build evolutionary narratives of early animal evolution based on molecular clock timescales appear to be premature.

Dubitable Darwin? Why Some Smart, Nonreligious People Doubt the Theory of Evolution

By John Horgan | July 6, 2010 |

Last year, on the 150th anniversary of the publication of Origin of Species, Darwin's stock soared higher than Apple's. It's 2010—time for a market adjustment.

The philosopher Daniel Dennett once called the theory of evolution by natural selection "the single best idea anyone has ever had." I'm inclined to agree. But Darwinism sticks in the craw of some really smart people. I don't mean intelligent-designers (aka IDiots) and other religious ignorami but knowledgeable scientists and scholars.

Take, for example, the philosopher Jerry Fodor of Rutgers University and the cognitive scientist Massimo Piattelli-Palmarini of the University of Arizona in Tucson. In What Darwin Got Wrong (Farrar, Straus and Giroux, 2010), these self-described atheists argue that the theory of natural selection is "fatally flawed." Their book, which I reviewed for The Philadelphia Inquirer, is, well, fatally flawed. For example, they air familiar debates over how large a role contingency plays in evolution; whether natural selection operates primarily at the level of genes; why certain clusters of genes persist unchanged for eons. Fodor and Piattelli-Palmarini wrap up the discussion of each debate with the same kicker: natural selection must be wrong.

But saying debates over contingency, levels of selection and gene conservation disprove evolutionary theory is like saying debates over the formation of Saturn's rings disprove heliocentrism. If you're going to shoot the king, the old saying goes, you had better kill him. Fodor and Piattelli-Palmarini don't even wound Darwin. What Darwin Got Wrong nonetheless serves as a useful reminder of more coherent complaints about natural selection.

I lump Darwin's secular critics into two camps: Some, such as the left-leaning biologists Stephen Jay Gould and Richard Lewontin (who are cited by Fodor and Piattelli-Palmarini), fear the political implications of Darwinian theory. If we accept evolutionary explanations of human nature, they suggest, we may come to believe that many insidious modern "-isms"—unbridled capitalism, racism, sexism and militarism—were highly probable outcomes of evolution and thus not easily subject to change. Given how genetic theories have been employed in the past, these concerns have merit.

Other critics object to Darwinism for precisely the opposite reason. They fear that evolutionary theory, even when buttressed by modern genetics and molecular biology, does not make reality probable enough. Reality seems too precarious, too much a product of blind luck. No one has worked harder to solve the improbability problem than the biologist Richard Dawkins. Ironically, Dawkins has also revealed how deep and possibly intractable the problem is.

In Climbing Mount Improbable (W. W. Norton, 1997) Dawkins emphasizes that the vast majority of variants of a given species fail to propagate; there are many more ways to be a loser in the game of life than to be a success. Surely that is true of life as a whole. Of all the imaginable possible histories of life, what is the likelihood that it would persist for billions of years, long enough to produce toads, baboons and Glenn Beck?

Dawkins also notes that "nature, unlike humans with brains, has no foresight." Each individual organism pursues its short-term interests regardless of the long-term consequences for life as a whole or even for other members of the species. Given this fact, it is all too easy to imagine scenarios in which one species—a bacterium or virus, perhaps—runs amok and destroys all life on Earth.

If our past was improbable, our future might be as well. Recognizing this implication of evolutionary theory, some scientists have proposed alternative mechanisms to make life more robust. For example, biochemists such as Ilya Prigogine and Stuart Kauffman (cited by Fodor and Piattelli-Palmarini) have postulated "self-organization" forces that made the origin of life and its subsequent history highly probable.

Other theorists have proposed that natural selection may favor not just genes or individuals but populations, species, even entire ecosystems. The most extreme version of this group-selection concept is Gaia theory, which holds that all of life somehow conspires to ensure its continued survival. Self-organization and Gaia are flawed theories that have won few adherents, but that doesn't mean that the problem they address doesn't exist.

Early in his career, the philosopher Karl Popper (yes, cited by F and P-P) called evolution via natural selection "almost a tautology" and "not a testable scientific theory but a metaphysical research program." Attacked for these criticisms, Popper took them back. But when I interviewed him in 1992, he blurted out that he still found Darwin's theory dissatisfying. "One ought to look for alternatives!" Popper exclaimed, banging his kitchen table.

Is it possible that some future genius will discover an alternative that supplants Darwinism as our framework for understanding life? Will we ever look back on Darwin as brilliant but wrong?

Postscript: I'd like to thank my buddy Robert Hutchinson—author, editor, polymath, punster, triathlete—for suggesting that I call this blog "Cross-check". A cross-check is an illegal hit in hockey. I don't cross-check on the ice, but on this blog anything goes.

We present the first measurements of the abundances of α-elements (Mg, Si, and S) extending out to beyond the virial radius of a cluster of galaxies. Our results, based on Suzaku Key Project observations of the Virgo Cluster, show that the chemical composition of the intra-cluster medium is consistent with being constant on large scales, with a flat distribution of the Si/Fe, S/Fe, and Mg/Fe ratios as a function of radius and azimuth out to 1.4 Mpc (1.3 r200). Chemical enrichment of the intergalactic medium due solely to core collapse supernovae (SNcc) is excluded with very high significance; instead, the measured metal abundance ratios are generally consistent with the Solar value. The uniform metal abundance ratios observed today are likely the result of an early phase of enrichment and mixing, with both SNcc and type Ia supernovae (SNIa) contributing to the metal budget during the period of peak star formation activity at redshifts of 2-3. We estimate the ratio between the number of SNIa and the total number of supernovae enriching the intergalactic medium to be between 12% and 37%, broadly consistent with the metal abundance patterns in our own Galaxy or with the SNIa contribution estimated for the cluster cores.
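The supernova-mixing estimate can be sketched as a two-component yield balance: solve for the SNIa fraction f that reproduces an observed Si/Fe ratio from a mix of SNIa and SNcc per-event yields. The yields and the observed ratio below are placeholder values chosen for illustration; they are not the ones adopted in the analysis.

```python
def snia_fraction(si_fe_obs, y_ia, y_cc):
    """SNIa number fraction f reproducing an observed Si/Fe mass ratio from a
    two-component mix of per-event yields:
        si_fe_obs = (f*Si_Ia + (1-f)*Si_cc) / (f*Fe_Ia + (1-f)*Fe_cc)
    Cross-multiplying makes this linear in f."""
    si_ia, fe_ia = y_ia
    si_cc, fe_cc = y_cc
    return (si_fe_obs * fe_cc - si_cc) / ((si_ia - si_cc) - si_fe_obs * (fe_ia - fe_cc))

# Placeholder (Si, Fe) yields per event in solar masses; illustrative only.
# SNIa are Fe-rich, IMF-averaged SNcc are comparatively Si-rich.
f = snia_fraction(si_fe_obs=0.5, y_ia=(0.15, 0.70), y_cc=(0.10, 0.07))
```

With these made-up numbers the solver lands in the same general range as the abstract's 12-37% estimate, which is the basic logic of inferring the SNIa share from flat abundance-ratio profiles.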

Science is increasingly integral to public life. One can hardly avoid taking positions on a range of scientific matters, from climate change, genetically modified foods, genetic testing, and pharmaceuticals, to disease control, patient care, stem cells, and data analytics. Yet most citizens and lawmakers lack the skills or background needed to grasp the underlying technical issues. Scientists are thus guardians of knowledge—however mundane—beyond the reach of average citizens.

This puts the layman in a rather awkward position, for scientists are fickle guardians.

On the one hand, they are fiercely loyal to their knowledge claims. They simply assume that the experimental method is the best way to understand the natural world—and sometimes the only way to understand anything. And they advance their conclusions with a degree of confidence that most other intellectuals can only envy. Thus the layman is reluctant to dismiss or criticize scientific findings, for to do so would require either possessing a similar facility with the scientific method (unlikely) or rejecting that method (unwise).

On the other hand, scientists can be surprisingly disloyal to their knowledge claims, abandoning them as soon as better ones come along. Sometimes they even abandon entire worldviews. The speed of light; the nature of matter; the indivisibility of the atom; the age of the cosmos; the categorizations of the planets; of dinosaurs; the geocentric universe; the clockwork universe—all of these have been or are subject to revision. In science, almost nothing is sacred. Émile Meyerson, the early twentieth-century philosopher of science who influenced Thomas Kuhn, described this situation well:

On the one hand, he sees that scientists demand for their conclusions an authority that is nearly absolute: ... But on the other hand, their pronouncements... clearly display no fixity whatsoever in their eyes... scientists will abandon without scruple an entire system that had, until recently, seemed certain to them. This about face accomplished, they treat as futile ... any attempt to return to the old notions that science and humanity had for a long time seen as infallible expressions of truth—precisely because they obeyed the injunctions of scientists themselves.

The problem is that science is both dogmatic and skeptical—or rather, neither fully dogmatic nor fully skeptical—a bewildering characteristic that allows science to advance. But the disfiguring lenses of popular journalism and political debate transform this healthy tension into an untenable disjunction. On the one hand, we are told: “The science is settled!” Question not. On the other: “Science is never settled!” Question all. Depending on the issue, say, climate change or GMOs, politicians and pundits on the left or right will opportunistically appeal to one or the other.

Purveyors of “settled science” implicitly offer a picture of the scientific community as inviolable dispensers of knowledge. Knowledge here is a product to be consumed by the lay public. Questioning scientific findings, their alleged certitude, or policy implications is thus tantamount to rejecting the product wholesale. Disparaging analogies with “Flat-Earthers,” even Holocaust deniers, follow close behind.

Sensing something awry with such dogmatic invocations and the simplistic image of scientific inquiry they presuppose, some go to the other extreme, radically and skeptically questioning all scientific authority. They maintain “settled science” is a myth. Invocations of Galileo’s audacious pursuit of truth in the face of Church strictures follow close behind. Sometimes such skepticism turns into cynicism about the scientific enterprise itself.

Start with the skeptics. Scientists patently do not question everything: they are not, in general, skeptical about our capacity to gain knowledge about the natural world or about the reliability of the scientific method. Can we trust our senses? Can we be sure that our scientific theories correspond to reality? Do we ever have sufficient justification for our beliefs? Scientists dogmatically assume affirmative answers to these ancient philosophical questions, believing knowledge is possible and that their methods and standards of evidence are sufficient to achieve it. That’s not skepticism; and we are all the better for it—or at any rate, more knowledgeable. For thoroughgoing skepticism, see Sextus Empiricus, not Physics 101.

"Empirical reasoning is inductive, advancing from observations to probabilistic generalizations, rather than deductive, moving from universal truths to necessary conclusions. When scientists—or, more often, pundits—express absolute certainty about scientific theories, they go far beyond what any empirical evidence ever warrants. If scientific laws are empirically confirmed, then, no matter how robust, they can be—in principle—empirically disconfirmed. To say otherwise is disingenuous."

A friend's comment:

This is an interesting mix of logical empiricism and falsifiability. Popper would emphatically disagree with this statement. Induction was verification's ally until Popper introduced falsifiability, after which induction became the enemy of certainty. According to Popper, nothing is inductively confirmed, no matter how probable the case.

It’s probably best to get the bad news out of the way first. The so-called scientific method is a myth. That is not to say that scientists don’t do things that can be described and are unique to their fields of study. But to squeeze a diverse set of practices that span cultural anthropology, paleobotany, and theoretical physics into a handful of steps is an inevitable distortion and, to be blunt, displays a serious poverty of imagination. Easy to grasp, pocket-guide versions of the scientific method usually reduce to critical thinking, checking facts, or letting “nature speak for itself,” none of which is really all that uniquely scientific. If typical formulations were accurate, the only place true science would be taking place would be grade-school classrooms.


Scratch the surface of the scientific method and the messiness spills out. Even simplistic versions vary from three steps to eleven. Some start with hypothesis, others with observation. Some include imagination. Others confine themselves to facts. Question a simple linear recipe and the real fun begins. A website called Understanding Science offers an “interactive representation” of the scientific method that at first looks familiar. It includes circles labeled “Exploration and Discovery” and “Testing Ideas.” But there are others named “Benefits and Outcomes” and “Community Analysis and Feedback,” both rare birds in the world of the scientific method. To make matters worse, arrows point every which way. Mouse over each circle and you find another flowchart with multiple categories and a tangle of additional arrows.

It’s also telling where invocations of the scientific method usually appear. A broadly conceived method receives virtually no attention in scientific papers or specialized postsecondary scientific training. The more “internal” a discussion — that is, the more insulated from nonscientists — the more likely it is to involve procedures, protocols, or techniques of interest to close colleagues.

Meanwhile, the notion of a heavily abstracted scientific method has pulled public discussion of science into its orbit, like a rhetorical black hole. Educators, scientists, advertisers, popularizers, and journalists have all appealed to it. Its invocation has become routine in debates about topics that draw lay attention, from global warming to intelligent design. Standard formulations of the scientific method are important only insofar as nonscientists believe in them.

The Bright Side

Now for the good news. The scientific method is nothing but a piece of rhetoric. Granted, that may not appear to be good news at first, but it actually is. The scientific method as rhetoric is far more complex, interesting, and revealing than it is as a direct reflection of the ways scientists work. Rhetoric is not just words; rather, “just” words are powerful tools to help shape perception, manage the flow of resources and authority, and make certain kinds of actions or beliefs possible or impossible. That’s particularly true of what Raymond Williams called “keywords.” Modern-day keywords include “family,” “race,” “freedom,” and “science.” Such words are familiar, repeated again and again until it seems that everyone must know what they mean. At the same time, scratch their surface, and their meanings become full of messiness, variation, and contradiction.

Sound familiar? Scientific method is a keyword (or phrase) that has helped generations of people make sense of what science was, even if there was no clear agreement about its precise meaning — especially if there was no clear agreement about its precise meaning. The term could roll off the tongue and be met by heads nodding in knowing assent, and yet there could be a different conception within each mind. As long as no one asked too many questions, the flexibility of the term could be a force of cohesion and a tool for inspiring action among groups. A word with too exact a definition is brittle; its use will be limited to specific circumstances. A word too loosely defined will create confusion and appear to say nothing. A word balanced just so between precision and vagueness can change the world.

The Scientific Method, a Historical Perspective

This has been true of the scientific method for some time. As early as 1874, British economist Stanley Jevons (1835–1882) commented in his widely noted Principles of Science, “Physicists speak familiarly of scientific method, but they could not readily describe what they mean by that expression.” Half a century later, sociologist Stuart Rice (1889–1969) attempted an “inductive examination” of the definitions of the scientific method offered in social scientific literature. Ultimately, he complained about its “futility.” “The number of items in such an enumeration,” he wrote, “would be infinitely large.”

And yet the wide variation in possible meanings has made the scientific method a valuable rhetorical resource. Methodological pictures painted by practicing scientists have often been tailored to support their own position and undercut that of their adversaries, even if inconsistency results. As rhetoric, the scientific method has performed at least three functions: it has been a tool of boundary work, a bridge between the scientific and lay worlds, and a brand that represents science itself. It has typically fulfilled all these roles at once, but they also represent a rough chronology of its use. Early in the term’s history, the focus was on enforcing boundaries around scientific ideas and practices. Later, it was used more forcefully to show nonscientists how science could be made relevant. More or less coincidentally, its invocation assuaged any doubts that real science was present.

Timing is a crucial factor in understanding the scientific method. Discussion of the best methodology with which to approach the study of nature goes back to the ancient Greeks. Method also appeared as an important concern for natural philosophers during the Islamic and European Middle Ages, whereas many historians have seen the methodological shifts associated with the Scientific Revolution as crucial to the creation of modern science. Given all that, it’s even more remarkable that “scientific method” was rarely used before the mid-nineteenth century among English speakers, and only grew to widespread public prominence from the late nineteenth to the early twentieth centuries, peaking somewhere between the 1920s and 1940s. In short, the scientific method is a relatively recent invention.

But it was not alone. Such now-familiar pieces of rhetoric as “science and religion,” “scientist,” and “pseudoscience” grew in prominence over the same period of time. In that sense, “scientific method” was part of what we might call a rhetorical package, a collection of important keywords that helped to make science comprehensible, to clarify its differences with other realms of thought, and to distinguish its devotees from other people. All of this paralleled a shift in popular notions of science from general systematized knowledge during the early 1800s to a special and unique sort of information by the early 1900s. It was this shift that habits of talk about the scientific method helped to cement, opening the door to attestations of the authority of science in contrast with other human activities.

Such labor is the essence of what Thomas Gieryn (b. 1950) has called “boundary-work” — that is, exploiting variations and even apparent contradictions in potential definitions of science to enhance one’s own access to social and material resources while denying such benefits to others. During the late 1800s, the majority of public boundary-work around science was related to the raging debate over biological evolution and the emerging fault line between science and religion. Given that, we might expect the scientific method to have been a prominent weapon for the advocates of evolutionary ideas, such as John Tyndall (1820–1893) or Thomas Henry Huxley (1825–1895). But that wasn’t the case. The notion of a uniquely scientific methodology was still too new and lacked the rhetorical flexibility that made it useful. Instead, the loudest invocations of the scientific method were by those who hoped to limit the reach of science. An author in a magazine called Ladies’ Repository (1868) reflected that “every generation, as it accumulates fresh illustrations of the scientific method, is more and more embarrassed at how to piece them in with that far grander and nobler personal discipline of the soul which hears in every circumstance of life some new word of command from the living God.”

Pablo A. Marquet (pmarquet{at}bio.puc.cl) is affiliated with the Department of Ecology in the School of Biological Sciences, at the Pontifical Catholic University of Chile, in Santiago; the Institute of Ecology and Biodiversity, also in Santiago; the Santa Fe Institute, in Santa Fe, New Mexico; and the Instituto de Sistemas Complejos de Valparaíso, Chile. Andrew P. Allen is affiliated with the Department of Biological Sciences at Macquarie University, in Sydney, Australia. James H. Brown is affiliated with the Department of Biology at the University of New Mexico, in Albuquerque. Jennifer A. Dunne, Brian J. Enquist, and Geoffrey B. West are affiliated with the Santa Fe Institute; JAD is also affiliated with the Pacific Ecoinformatics and Computational Ecology Lab, in Berkeley, California; and BJE is also affiliated with the Department of Ecology and Evolutionary Biology at the University of Arizona, in Tucson. James F. Gillooly is affiliated with the Department of Biology at the University of Florida, in Gainesville. Patricia A. Gowaty and Steve P. Hubbell are affiliated with the Department of Ecology and Evolutionary Biology and the Institute of the Environment and Sustainability, at the University of California, Los Angeles, and with the Smithsonian Tropical Research Institute, in Panama City, Panama. Jessica L. Green is affiliated with the Institute of Ecology and Evolutionary Biology at the University of Oregon, in Eugene. John Harte is affiliated with the Energy and Resources Group and with the Environmental Science, Policy, and Management Department at the University of California, Berkeley. James O’Dwyer is affiliated with the Department of Plant Biology at the University of Illinois at Urbana–Champaign. Jordan G. Okie is affiliated with the School of Earth and Space Exploration at Arizona State University, in Tempe. Annette Ostling is affiliated with the Department of Ecology and Evolutionary Biology at the University of Michigan, in Ann Arbor.
Mark Ritchie is affiliated with the Department of Biology at Syracuse University, in Syracuse, New York. David Storch is affiliated with the Center for Theoretical Study and with the Department of Ecology, in the Faculty of Science, at Charles University, in Prague, Czech Republic.

We argue for expanding the role of theory in ecology to accelerate scientific progress, enhance the ability to address environmental challenges, foster the development of synthesis and unification, and improve the design of experiments and large-scale environmental-monitoring programs. To achieve these goals, it is essential to foster the development of what we call efficient theories, which have several key attributes. Efficient theories are grounded in first principles, are usually expressed in the language of mathematics, make few assumptions and generate a large number of predictions per free parameter, are approximate, and entail predictions that provide well-understood standards for comparison with empirical data. We contend that the development and successive refinement of efficient theories provide a solid foundation for advancing environmental science in the era of big data.

Biological systems are influenced by random processes at all scales, including molecular, demographic, and behavioral fluctuations, as well as by their interactions with a fluctuating environment. We previously established microbial closed ecosystems (CES) as model systems for studying the role of random events and the emergent statistical laws governing population dynamics. Here, we present long-term measurements of population dynamics using replicate digital holographic microscopes that maintain CES under precisely controlled external conditions while automatically measuring the abundances of three microbial species via single-cell imaging. With this system, we measure spatiotemporal population dynamics in more than 60 replicate CES over periods of months. In contrast to previous studies, we observe strongly deterministic population dynamics in replicate systems. Furthermore, we show that previously discovered statistical structure in abundance fluctuations across replicate CES is driven by variation in external conditions, such as illumination. In particular, we confirm the existence of stable ecomodes governing the correlations in population abundances of the three species. The observation of strongly deterministic dynamics, together with a stable structure of correlations in response to external perturbations, points towards the possibility of simple macroscopic laws governing microbial systems despite the numerous stochastic events present at microscopic levels.

[...] the dynamics of this ecosystem seems to follow a prescribed “program.”

What might this program be, then? Theoretical ecologists may feel compelled to build a model; after all, the three-species ecosystem is just a trophic chain that can be described by three differential equations, one for each species. Wrong. The deterministic dynamics of the system depend on a multitude of microscopic factors. Algae, bacteria, and ciliates can swim, aggregate, and change their size and behavior. All of these factors combine to shift species abundances in a reproducible yet unpredictable manner. The upshot is that we know there is a program, but we don't know its rules.
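To make the point concrete, here is a minimal sketch of the "obvious" three-equation trophic-chain model that the text argues is insufficient: algae as the resource, bacteria as the consumer, ciliates as the predator. All parameter values are hypothetical, chosen only to illustrate the structure of such a model, not to reproduce the experiment.

```python
# A naive three-species trophic-chain model (logistic resource, two
# consumer levels). This is the kind of model the text says misses the
# microscopic factors that actually drive the observed dynamics.
# All parameters are illustrative, not fitted to any data.
from scipy.integrate import solve_ivp

def trophic_chain(t, y, r=1.0, K=10.0, a=0.4, b=0.3, c=0.3, d=0.2,
                  m_b=0.1, m_c=0.1):
    A, B, C = y  # algae, bacteria, ciliates
    dA = r * A * (1 - A / K) - a * A * B       # logistic growth minus grazing
    dB = b * a * A * B - c * B * C - m_b * B   # grazing gain, predation, mortality
    dC = d * c * B * C - m_c * C               # predation gain, mortality
    return [dA, dB, dC]

# Integrate from an arbitrary initial condition for 200 time units.
sol = solve_ivp(trophic_chain, (0, 200), [1.0, 0.5, 0.2])
print(sol.y[:, -1])  # final abundances of the three species
```

The model is well posed and easy to simulate; the text's point is that its rules do not capture swimming, aggregation, and behavioral changes, so its trajectories need not resemble the real system's reproducible dynamics.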

Researchers could try to find some set of rules by measuring all the microscopic processes that take place in the closed ecosystem. But that exercise would most likely result in a model for this specific experiment, rather than in a scientific synthesis of the ecological forces that drive the system to its deterministic behavior. Experience from other complex systems shows that, to find a useful scientific synthesis, we need to identify order parameters, or "collective variables," that capture the macroscopic behavior emerging from a myriad of microscopic details.
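One generic way to hunt for such collective variables is to ask whether replicate abundance measurements share a few dominant directions of covariation, for instance via principal component analysis. The sketch below runs PCA on synthetic data in which one hidden external driver (standing in for illumination) moves all three species together; the study's actual ecomode analysis may differ, and every number here is invented for illustration.

```python
# A hedged sketch: extracting candidate collective variables ("ecomodes")
# from replicate abundance data via PCA. Data are synthetic: one shared
# external driver plus independent noise per replicate.
import numpy as np

rng = np.random.default_rng(0)

n_rep = 60                                   # replicate ecosystems
light = rng.normal(size=n_rep)               # hypothetical external driver
loadings = np.array([1.0, -0.5, 0.8])        # how each species responds to it

# 60 replicates x 3 species: driven signal plus small independent noise.
X = np.outer(light, loadings) + 0.1 * rng.normal(size=(n_rep, 3))

# PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)

print(explained)   # fraction of variance captured by each component
print(Vt[0])       # leading component: a candidate collective variable
```

When a single external factor dominates, the first component absorbs most of the variance and its loadings describe the correlated response of the three species, which is the spirit of the stable ecomodes reported in the study.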

Finding those variables remains a challenging problem, and it is one of the main scientific frontiers in the physics of life. However, for physicists, as for biologists, it should be clear that despite the great advances in molecular systems biology, the most ambitious questions in biology remain at the highest levels of organization.

The widespread application of high-throughput sequencing in studying evolutionary processes and patterns of diversification has led to many important discoveries. However, the barriers to utilizing these technologies and interpreting the resulting data can be daunting for first-time users. We provide an overview and a brief primer of relevant methods (e.g., whole-genome sequencing, reduced-representation sequencing, sequence-capture methods, and RNA sequencing), as well as important steps in the analysis pipelines (e.g., loci clustering, variant calling, whole-genome and transcriptome assembly). We also review a number of applications in which researchers have used these technologies to address questions related to avian systems. We highlight how genomic tools are advancing research by discussing their contributions to 3 important facets of avian evolutionary history. We focus on (1) general inferences about biogeography and biogeographic history, (2) patterns of gene flow and isolation upon secondary contact and hybridization, and (3) quantifying levels of genomic divergence between closely related taxa. We find that in many cases, high-throughput sequencing data confirms previous work from traditional molecular markers, although there are examples in which genome-wide genetic markers provide a different biological interpretation. We also discuss how these new data allow researchers to address entirely novel questions, and conclude by outlining a number of intellectual and methodological challenges as the genomics era moves forward.

SUMMARY

The widespread application of high-throughput sequencing methods to study evolutionary processes and patterns of diversification has led to many important discoveries. However, using these technologies and interpreting the resulting data can be intimidating for researchers without previous experience. This paper presents a summary and a brief introduction to relevant methods (e.g., whole-genome sequencing, reduced-representation library sequencing, sequence-capture methods, and RNA sequencing), as well as important steps in the analysis protocols (e.g., locus clustering, variant calling, and the assembly of complete genomes and transcriptomes). We also present examples of applications in which researchers have used these technologies to answer questions related to the evolution of birds. We highlight how genomic tools advance research by discussing their contributions to three major aspects of the evolutionary history of birds. We focus on (1) general inferences about biogeography and biogeographic history, (2) patterns of gene flow and genetic isolation after secondary contact and hybridization, and (3) quantifying levels of genomic divergence between closely related taxa. We found that in many cases high-throughput sequencing data confirmed the results of previous studies with traditional molecular markers, although there are examples where genome-wide marker sampling provides a different biological interpretation. Finally, we discuss how these new data can address entirely new questions, and we conclude by outlining a series of methodological and intellectual challenges for the future in the age of genomics.

Saturday, October 24, 2015

The Feynman Lectures on Physics

Feynman • Leighton • Sands

Caltech and The Feynman Lectures Website are pleased to present this online edition of The Feynman Lectures on Physics. Now, anyone with internet access and a web browser can enjoy reading a high-quality, up-to-date copy of Feynman's legendary lectures.

However, we want to be clear that this edition is only free to read online, and this posting does not transfer any right to download all or any portion of The Feynman Lectures on Physics for any purpose.

This edition has been designed for ease of reading on devices of any size or shape; text, figures and equations can all be zoomed without degradation.