Monday, August 31, 2015

1 Institute for Physical Science and Technology, University of Maryland, College Park, Maryland 20742, USA

2 Institute for Research in Electronics and Applied Physics, University of Maryland, College Park, Maryland 20742, USA

Chaos 25, 097618 (2015)

Abstract

In this paper, we propose, discuss, and illustrate a computationally feasible definition of chaos which can be applied very generally to situations that are commonly encountered, including attractors, repellers, and non-periodically forced systems. This definition is based on an entropy-like quantity, which we call “expansion entropy,” and we define chaos as occurring when this quantity is positive. We relate and compare expansion entropy to the well-known concept of topological entropy to which it is equivalent under appropriate conditions. We also present example illustrations, discuss computational implementations, and point out issues arising from attempts at giving definitions of chaos that are not entropy-based.

Key Topics

Entropy, Chaos, Attractors, Manifolds, Singular values


Toward the end of the 19th century, Poincaré demonstrated the occurrence of extremely complicated orbits in the Newtonian dynamics of three gravitationally attracting bodies. This complexity is now called chaos and has received a vast amount of attention since Poincaré's early discovery. In spite of this abundant past and current work, there is still no broadly applicable, convenient, generally accepted definition of the term chaos. In this paper, we advocate a particular entropy-based definition that appears to be very simple, while, at the same time, is readily accessible to numerical computation, and can be very generally applied to a variety of often-encountered situations, including attractors, repellers, and non-periodically forced systems. We also review and compare various previous definitions of chaos.
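For intuition, the expansion-entropy quantity described above can be estimated numerically. The sketch below is our illustration, not the authors' implementation: for a one-dimensional map, derivative magnitudes are multiplied along sample orbits, contracting factors are clipped to 1 (only expansion counts), and the growth rate of the sample-averaged expansion factor estimates the entropy.

```python
import numpy as np

def expansion_entropy_1d(f, df, n_samples=10_000, t=30, seed=0):
    """Estimate expansion entropy H0 for a 1D map of [0, 1] into itself.

    H0 ~ (1/t) * ln <G_t(x)>, where G_t(x) is the product of |f'| along
    the orbit of x, with factors below 1 clipped to 1, and <.> averages
    over randomly sampled initial conditions.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, n_samples)
    log_g = np.zeros(n_samples)
    for _ in range(t):
        log_g += np.log(np.maximum(np.abs(df(x)), 1.0))  # clip contraction
        x = f(x)
    # average the expansion factors themselves, then take the growth rate
    return np.log(np.mean(np.exp(log_g))) / t

# Tent map: uniformly expanding with |f'| = 2, so H0 should equal ln 2,
# matching its known topological entropy.
tent = lambda x: np.where(x < 0.5, 2 * x, 2 - 2 * x)
dtent = lambda x: np.where(x < 0.5, 2.0, -2.0)
H0 = expansion_entropy_1d(tent, dtent)
```

For the tent map every orbit expands by exactly 2 per step, so the estimate is ln 2 ≈ 0.693 regardless of sampling; for nonuniform maps the sample average genuinely matters, and averaging the factors (not their logarithms) is what distinguishes this quantity from a Lyapunov exponent.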

Friday, August 28, 2015

The Dawn of Life in a $5 Toaster Oven

How a homemade piece of lab equipment is recreating chemical evolution on early Earth.

BY JOHNNY BONTEMPS

ILLUSTRATION BY JACKIE FERRENTINO

AUGUST 27, 2015

God might just as well have begun with a toaster oven. A few years ago at a yard sale, Nicholas Hud spotted a good candidate: a vintage General Electric model, chrome-plated with wood-grain panels, nestled in an old yellowed box, practically unused. The perfect appliance for cooking up the chemical precursors of life, he thought. He bought it for $5.

At home in his basement, with the help of his college-age son, he cut a rectangular hole in the oven’s backside, through which an automated sliding table (recycled from an old document scanner) could move a tray of experiments in and out. He then attached a syringe pump to some inkjet printer parts, and rigged the system to periodically drip water onto the tray.

Today the contraption sits atop a workbench in Hud’s laboratory at the Georgia Institute of Technology, where he directs the Center for Chemical Evolution, a multi-university consortium funded by NASA and the National Science Foundation. For the past two decades, he has been hunting for the chemical recipes that could explain how life arose on Earth. When scientists began investigating life’s molecular origin in the 1950s, they assumed that the first biological molecules formed spontaneously from a soup of primordial compounds: a lucky marriage of the right ingredients, under the right conditions, at the right time. Hud and his colleagues are now finding that the spark of life may have struck much more gradually, not by chance but via a long chemical evolution.

The toaster is his latest proving ground. It simulates the cycles of cool and hot, and wet and dry, that Hud suspects jump-started this evolutionary process, millions of years before the first cellular life forms emerged. It mimics dew condensing at night and evaporating with the sunrise; rain puddles filling up and drying out; coastal lagoons flooding and emptying with the tides. Hud calls it the “day-night machine.”

On a spring day last year, he and I are huddled around the homebuilt device, watching it work. Outside of the oven, the syringe delivers a few droplets of water into each of six wells in a ceramic plate on the sliding tray. For the purpose of this demo, the wells are empty; during experiments, they contain a mixture of simple molecules, or monomers, like those believed to have been present on early Earth. The tray disappears into the oven, sealing it shut. As the temperature rises to 185 degrees Fahrenheit (85 degrees Celsius), the water evaporates—the first day. A few minutes pass, and the tray slides out. The wells cool, water drips, and in goes the tray—the second day.

Natural environments are never truly constant, but the evolutionary implications of temporally varying selection pressures remain poorly understood. Here we investigate how the fate of a new mutation in a variable environment depends on the dynamics of environmental fluctuations and on the selective pressures in each condition. We find that even when a mutation experiences many environmental epochs before fixing or going extinct, its fate is not necessarily determined by its time-averaged selective effect. Instead, environmental variability reduces the efficiency of selection across a broad parameter regime, rendering selection unable to distinguish between mutations that are substantially beneficial and substantially deleterious on average. Temporal fluctuations can also dramatically increase fixation probabilities, often making the details of these fluctuations more important than the average selection pressures acting on each new mutation. For example, mutations that result in a tradeoff between conditions but are strongly deleterious on average can nevertheless be more likely to fix than mutations that are always neutral or beneficial. These effects can have important implications for patterns of molecular evolution in variable environments, and they suggest that it may often be difficult for populations to maintain specialist traits, even when their loss leads to a decline in time-averaged fitness.
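The tradeoff effect described above can be made concrete with a minimal Wright-Fisher simulation. This is our sketch with hypothetical parameters, not the study's model: a single mutant copy whose selection coefficient alternates between epochs, compared against an always-neutral mutation.

```python
import numpy as np

def fixation_prob(N, s_epochs, epoch_len, trials=2000, seed=0):
    """Monte Carlo fixation probability of a single new mutant copy in a
    Wright-Fisher population of size N. The mutant's selection coefficient
    cycles through s_epochs, switching every epoch_len generations.
    Note the mutant arises at the start of the first epoch -- timing matters.
    """
    rng = np.random.default_rng(seed)
    fixed = 0
    for _ in range(trials):
        n, gen = 1, 0  # one mutant copy; generation counter
        while 0 < n < N:
            s = s_epochs[(gen // epoch_len) % len(s_epochs)]
            # expected mutant frequency after selection
            p = n * (1 + s) / (n * (1 + s) + (N - n))
            n = rng.binomial(N, p)  # multinomial resampling, two alleles
            gen += 1
        fixed += (n == N)
    return fixed / trials

# A tradeoff mutation (+5% in one condition, -6% in the other) is
# deleterious on average, yet can fix far more often than a neutral one
# if it arises at the start of a long favorable epoch.
p_tradeoff = fixation_prob(N=1000, s_epochs=[0.05, -0.06], epoch_len=200)
p_neutral = fixation_prob(N=1000, s_epochs=[0.0], epoch_len=200)
```

With these (illustrative) parameters, a mutant that establishes during its first favorable epoch can sweep to fixation before the unfavorable epoch begins, so its fixation probability approaches the single-condition establishment probability (~2s) rather than the neutral 1/N — the details of the fluctuations, not the average selection pressure, decide its fate.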

Funding: This study was a result of funding from the United States National Science Foundation (http://www.nsf.gov); the relevant grant numbers are DEB#-1301820 (KAC), 1208428 (DES), 1208719 (DSH), 1208741 (LAK). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Given that reproducibility is a pillar of scientific research, the preservation of scientific knowledge (underlying data) is of paramount importance. The standard of reproducibility can be evaluated based on criteria of methodological rigor and legitimacy, which is sometimes used to distinguish “hard” from “soft” sciences. In phylogenetics, a discipline that routinely uses DNA sequences to build trees reflecting organismal relationships, the scale of data collection and the complexity of analytical software have both increased dramatically during the past decade. Consequently, the ability to navigate publications and reproduce analyses is more challenging than ever. When DNA sequencing was initially employed in systematics during the late 1980s, there was some reluctance to deposit nucleotide sequences in open repositories such as GenBank [1]. This ultimately changed when high-impact journals (e.g., Proceedings of the National Academy of Sciences, Nature, Science) began requiring GenBank submission as a prerequisite for publication [1],[2]; now virtually every evolutionary biology journal observes this requirement (but see [3]).

Until recently, uploading sequences to GenBank (or EMBL) was generally considered sufficient to ensure reproducibility of phylogenetic studies using DNA sequence data. Increasingly, however, the systematics community is realizing that archiving raw DNA sequences is not adequate, and that the underlying alignments of DNA sequences as well as the resulting phylogenetic trees are pivotal for reproducibility, comparative purposes, meta-analyses, and ultimately synthesis. Indeed, there has been a growing clamor for journals to adopt and enforce more rigorous data archiving practices across diverse disciplines [4]–[8]. As a result, about 35 evolutionary journals [5],[9] have adopted policies to encourage or require authors to upload alignments, phylogenetic trees, and other files requisite for study reproducibility [5] to TreeBASE (http://treebase.org/) and/or other public repositories such as Dryad (http://datadryad.org). Unfortunately, enforcement of such data deposition policies is generally lax, and most journals in systematics and evolution still do not require DNA sequence alignment or tree deposition. As a result, the alignments and trees underlying most published papers in systematics/phylogenetics and evolutionary biology remain inaccessible to the scientific community at large [8],[10].

Scope of the Problem

As DNA sequencing has become easier, faster, and cheaper, and as scientists have come to realize that phylogenies inform diverse areas of inquiry, phylogenetic trees have permeated virtually every facet of biology, including disparate subdisciplines such as medicine (e.g., [11],[12]), climate change research (e.g., [13],[14]), organismal evolution (e.g., [15]), conservation efforts (e.g., [16]), and linguistics (e.g., [17]). In building phylogenetic trees, researchers implicitly acknowledge that alignments and trees are important. However, archiving these data has been largely ignored, perhaps because researchers have considered the actual raw sequence data as the sole information necessary to replicate a phylogenetic study, while alignments and phylogenetic trees have been treated as the resulting outcome from sequence data analyses. The latter view of alignments and trees is certainly correct, but the underlying sequence alignments and associated trees should also be recognized as crucial data in their own right. The increasing use of published trees and the underlying sequence alignments as the framework for evolutionary inference and other subsequent downstream hypothesis testing dictates, however, that alignments and trees are data and need to be archived with a diligence on par with raw sequence data.

The call for ensuring reproducibility and data sharing in systematics is not new. The fundamental importance of archiving scientific datasets across numerous subdisciplines including climate change research, evolutionary biology, and medicine has received increasing attention over the past five years [5]–[8],[10],[18]–[22]. Several of these studies have examined the proportion of publications that archived data in a manner that affords public access [6],[8],[18], and all concluded that we have entered an age in which scientific journals should require and enforce data archiving policies.

Some researchers, including [23] for psychology and [4] for medical research, have taken the next step and have contacted authors directly when data of interest have not been available, which highlighted an additional problem. These workers found that data are not easily obtained via direct author contact. More recently, Stoltzfus et al. [8] examined deposition practices within the molecular systematic community, and estimated alignment/tree deposition rates to be remarkably low (~4%). Stoltzfus et al. [8] focused on only two journals (American Journal of Botany and Evolution), and searched the literature over just a 2-year period (2010–2011). Although the study of Stoltzfus et al. [8] represents a good first step, no analysis has attempted to evaluate how often alignments/trees are deposited over a broad range of evolutionary biology journals that span organismal diversity representing the tree of life, or how archiving tendencies have changed over time.

In the process of gathering data to build the first tree of life for all ~1.9 million named species (the Open Tree of Life Project; http://opentreeoflife.org), we examined 7,539 peer-reviewed papers to evaluate data depositional practices of foundational DNA sequence alignments and phylogenetic trees by the systematic community between 2000 and 2012. Our broad survey of the literature covered animals, fungi, seed plants, microbial eukaryotes, archaea, and bacteria, and included publications from more than 100 journals (see Tables S1, S2, S3, S4). To assess the rigor of data that were deposited in a public archive, we also examined the quality (e.g., Did deposited trees match publication figure(s)? Were there branch lengths in deposited trees?) of ca. 350 files deposited in TreeBASE (described in Text S1). Additionally, we attempted to acquire data by randomly contacting 375 authors directly (see Text S1 and Table S4). Furthermore, to evaluate depositional practices of other data critical for study replication, we surveyed 100 randomly selected publications that implemented the popular evolutionary analysis package BEAST (Bayesian Evolutionary Analysis Sampling Trees [24]; 4,153 citations as of 7-17-2013), which is widely used to obtain divergence times and phylogenies that are used to test hypotheses and draw conclusions regarding broad biological questions (e.g., phylogeography, lineage origins).

Surprisingly, only 16.7% (1,262 of the 7,539 publications surveyed) provided accessible alignments/trees (Figures 1 and 2). Our attempts to obtain datasets directly from authors were only 16% successful (61/375; see Table S4), and we estimate that approximately 70% of existing alignments/trees are no longer accessible. Thus, we conclude that most of the underlying sequence alignments and phylogenetic trees produced by the systematic community during the past several decades are essentially lost, accessible only as static figures in a published journal article with no capacity for subsequent manipulation. Furthermore, when data are deposited, they are often incomplete (e.g., lacking excluded characters or accepted taxon names; see Text S1 and Figure S1). Our survey of publications that implemented BEAST revealed that only 11 out of 100 (11%) examined studies provided access to the underlying XML input file, which is critical for reproducing BEAST results. Although funding agencies often require all data to be accessible from funded publications, our results reveal this is more the exception than the rule.


Despite being known for nearly two centuries, new specimens of the derived non-pterodactyloid pterosaur Rhamphorhynchus continue to be discovered and reveal new information about their anatomy and palaeobiology. Here we describe a specimen held in the collections of the Royal Tyrrell Museum of Palaeontology, Alberta, Canada that shows both preservation and impressions of soft tissues, and also preserves material interpreted as stomach contents of vertebrate remains and, uniquely, a putative coprolite. The specimen also preserves additional evidence for fibers in the uropatagium.

‘As a working scientist I’ve learnt that peer review is very important to make science credible. The authority science can claim comes from evidence and experiment and an attitude of mind that seeks to test its theories to destruction… Scepticism is very important… be the worst enemy of your own idea, always challenge it, always test it.

I think things are a little different when you have a denialist or an extreme sceptic. They are convinced that they know what’s going on and they only look for data which supports that position and they’re not really engaging in the scientific process. There is a fine line between healthy scepticism which is a fundamental part of the scientific process and denial which can stop the science moving on. But the difference is crucial.’

We conducted shock experiments simulating comet impacts to assess the feasibility of peptide synthesis by such a process. We used a frozen mixture of the amino acid glycine, water ice, and silicate (forsterite) as the starting material and applied impact shocks ranging from 4.8 to 26.3 GPa using a vertical propellant gun under cryogenic conditions (77 K). The results show that amino acid oligomerization up to trimers can be achieved. Further, linear peptides (dipeptide and tripeptide forms), which are important materials for the further elongation of peptide chains, were obtained in yields one to two orders of magnitude greater than that of the cyclic peptide form (diketopiperazine). These results contrast with those of Blank et al. (2001), whose shock experiments on amino acid solutions at room temperature yielded amounts of diketopiperazines comparable to those of the linear peptides. Thus, the existence of cryogenic conditions at the point of impact shock may be critical for the formation of linear peptides. Our results demonstrate that comet impacts could have supplied a significant amount of linear peptides on the early Earth and other extraterrestrial bodies.

University of California, Davis, United States; University of California, San Diego, United States; University of Colorado School of Medicine, United States

Source: Jawdat Al-Bassam, UC Davis

Abstract

Microtubule dynamics and polarity stem from the polymerization of αβ-tubulin heterodimers. Five conserved tubulin cofactors/chaperones and the Arl2 GTPase regulate α- and β-tubulin assembly into heterodimers and maintain the soluble tubulin pool in the cytoplasm, but their physical mechanisms are unknown. Here, we reconstitute a core tubulin chaperone consisting of tubulin cofactors TBCD, TBCE, and Arl2, and reveal a cage-like structure for regulating αβ-tubulin. Biochemical assays and electron microscopy structures of multiple intermediates show the sequential binding of αβ-tubulin dimer followed by tubulin cofactor TBCC onto this chaperone, forming a ternary complex in which Arl2 GTP hydrolysis is activated to alter αβ-tubulin conformation. A GTP-state locked Arl2 mutant inhibits ternary complex dissociation in vitro and causes severe defects in microtubule dynamics in vivo. Our studies suggest a revised paradigm for tubulin cofactors and Arl2 functions as a catalytic chaperone that regulates soluble αβ-tubulin assembly and maintenance to support microtubule dynamics.

eLIFE Digest

Cells contain a network of protein filaments called microtubules. These filaments are involved in many biological processes; for example, they help cells keep the right shape, and they help to transport proteins and other materials inside cells.

Two proteins called α-tubulin and β-tubulin are the building blocks of microtubules. The filaments are very dynamic structures that can rapidly change length as individual tubulin units are either added to or removed from the filament ends. Several proteins known as tubulin cofactors and an enzyme called Arl2 help to build a vast pool of tubulin units that are able to attach to the microtubules. These units—called αβ-tubulin—are formed by α-tubulin and β-tubulin binding to each other, but it is not clear exactly what roles the tubulin cofactors and Arl2 play in this process.

Nithianantham et al. used a combination of microscopy and biochemical techniques to study how the tubulin cofactors and Arl2 are organised, and their role in the assembly of microtubules in yeast. The experiments show that Arl2 and two tubulin cofactors associate with each other to form a stable ‘complex’ that has a cage-like structure. A molecule of αβ-tubulin binds to the complex, followed by another cofactor called TBCC. This activates the enzyme activity of Arl2, which releases the energy needed to alter the shape of the αβ-tubulin. Nithianantham et al. also found that yeast cells with a mutant form of Arl2 that lacked enzyme activity had problems forming microtubules.

Together, these findings show that the tubulin cofactors and Arl2 form a complex that regulates the assembly and maintenance of αβ-tubulin. The next challenge is to understand how this regulation influences the way that microtubules grow and shrink inside cells.

The Dynamics of Incomplete Lineage Sorting across the Ancient Adaptive Radiation of Neoavian Birds

Alexander Suh , Linnéa Smeds, Hans Ellegren

Published: August 18, 2015 | DOI: 10.1371/journal.pbio.1002224

Abstract

The diversification of neoavian birds is one of the most rapid adaptive radiations of extant organisms. Recent whole-genome sequence analyses have much improved the resolution of the neoavian radiation and suggest concurrence with the Cretaceous-Paleogene (K-Pg) boundary, yet the causes of the remaining genome-level irresolvabilities remain unclear. Here we show that genome-level analyses of 2,118 retrotransposon presence/absence markers converge at a largely consistent Neoaves phylogeny and detect a highly differential temporal prevalence of incomplete lineage sorting (ILS), i.e., the persistence of ancestral genetic variation as polymorphisms during speciation events. We found that ILS-derived incongruences are spread over the genome and involve 35% and 34% of the analyzed loci on the autosomes and the Z chromosome, respectively. Surprisingly, Neoaves diversification comprises three adaptive radiations, an initial near-K-Pg super-radiation with highly discordant phylogenetic signals from near-simultaneous speciation events, followed by two post-K-Pg radiations of core landbirds and core waterbirds with much less pronounced ILS. We provide evidence that, given the extreme level of up to 100% ILS per branch in super-radiations, particularly rapid speciation events may neither resemble a fully bifurcating tree nor be resolvable as such. As a consequence, their complex demographic history is more accurately represented as local networks within a species tree.

Author Summary

The rise of modern birds began after the mass extinction of nonavian dinosaurs and archaic birds at the Cretaceous-Paleogene (K-Pg) boundary, about 66 million years ago. This coincides with the super-rapid adaptive radiation of Neoaves (a group that contains most modern birds), which has been difficult to resolve even with whole genome sequences. We reconstructed the genealogical fates of thousands of rare genomic changes (insertions of selfish mobile elements called retrotransposons), a third of which were found to be affected by a phenomenon known as incomplete lineage sorting (ILS), namely a persistence of polymorphisms across multiple successive speciation events. Astoundingly, we found that near the K-Pg boundary, speciation events were accompanied by extreme levels of ILS, suggesting a near-simultaneous, star-like diversification process that appears plausible in the context of instantaneous niche availability that must have followed the K-Pg mass extinction. Our genome-scale results provide a population genomic explanation as to why some species radiations may be more complex than a fully bifurcating tree of life. We suggest that, under such circumstances, ILS bears witness to the biological limitation of phylogenetic resolution.

Saturday, August 15, 2015

Content Volatility of Scientific Topics in Wikipedia: A Cautionary Tale

Adam M. Wilson , Gene E. Likens

Published: August 14, 2015 | DOI: 10.1371/journal.pone.0134454

Abstract

Wikipedia has quickly become one of the most frequently accessed encyclopedic references, despite the ease with which content can be changed and the potential for ‘edit wars’ surrounding controversial topics. Little is known about how this potential for controversy affects the accuracy and stability of information on scientific topics, especially those with associated political controversy. Here we present an analysis of the Wikipedia edit histories for seven scientific articles and show that topics we consider politically but not scientifically “controversial” (such as evolution and global warming) experience more frequent edits with more words changed per day than pages we consider “noncontroversial” (such as the standard model in physics or heliocentrism). For example, over the period we analyzed, the global warming page was edited on average (geometric mean ±SD) 1.9±2.7 times resulting in 110.9±10.3 words changed per day, while the standard model in physics was only edited 0.2±1.4 times resulting in 9.4±5.0 words changed per day. The high rate of change observed in these pages makes it difficult for experts to monitor accuracy and contribute time-consuming corrections, to the possible detriment of scientific accuracy. As our society turns to Wikipedia as a primary source of scientific information, it is vital we read it critically and with the understanding that the content is dynamic and vulnerable to vandalism and other shenanigans.
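For readers unfamiliar with the "geometric mean ± SD" summary used in the abstract above, a minimal sketch (illustrative numbers only, not the paper's data) shows how such skewed daily rates are summarized on a log scale:

```python
import numpy as np

def geometric_stats(daily_counts):
    """Geometric mean and geometric SD of strictly positive daily rates.

    Both are computed on the log scale and exponentiated back, which is
    standard for heavily right-skewed data such as edits per day.
    (Zero-count days would need separate handling, e.g. a small offset;
    how the paper handled them is not specified here.)
    """
    logs = np.log(np.asarray(daily_counts, dtype=float))
    return float(np.exp(logs.mean())), float(np.exp(logs.std()))

# Hypothetical daily edit counts spanning two orders of magnitude:
gm, gsd = geometric_stats([1, 10, 100])  # geometric mean is exactly 10
```

Note the geometric SD is a multiplicative factor (always ≥ 1), so "1.9 ± 2.7" here means the typical day falls within a factor of 2.7 of 1.9 edits, not between −0.8 and 4.6.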

Recombination is a central process for stably maintaining and transmitting a genome through somatic cell divisions and to new generations. Hence, recombination needs to be coordinated with other events occurring on the DNA template, such as DNA replication, transcription, and the specialized chromosomal functions at centromeres and telomeres. Moreover, regulation with respect to the cell-cycle stage is required as much as spatiotemporal coordination within the nuclear volume. These regulatory mechanisms impinge on the DNA substrate through modifications of the chromatin and directly on recombination proteins through a myriad of posttranslational modifications (PTMs) and additional mechanisms. Although recombination is primarily appreciated for maintaining genomic stability, the process also contributes to gross chromosomal rearrangements and copy-number changes. Hence, the recombination process itself requires quality control to ensure high fidelity and avoid genomic instability. Evidently, recombination and its regulatory processes have significant impact on human disease, specifically cancer and, possibly, neurodegenerative diseases.

Received 9 April 2015, Revised 16 July 2015, Accepted 19 July 2015, Available online 26 July 2015

Abstract

Accretion occurs pervasively in nature at widely different timeframes. The process also manifests in the evolution of macromolecules. Here we review recent computational and structural biology studies of evolutionary accretion that make use of the ideographic (historical, retrodictive) and nomothetic (universal, predictive) scientific frameworks. Computational studies uncover explicit timelines of accretion of structural parts in molecular repertoires and molecules. Phylogenetic trees of protein structural domains and proteomes and their molecular functions were built from a genomic census of millions of encoded proteins and associated terminal Gene Ontology terms. Trees reveal a ‘metabolic-first’ origin of proteins, the late development of translation, and a patchwork distribution of proteins in biological networks mediated by molecular recruitment. Similarly, the natural history of ancient RNA molecules inferred from trees of molecular substructures built from a census of molecular features shows patchwork-like accretion patterns. Ideographic analyses of ribosomal history uncover the early appearance of structures supporting mRNA decoding and tRNA translocation, the coevolution of ribosomal proteins and RNA, and a first evolutionary transition that brings ribosomal subunits together into a processive protein biosynthetic complex. Nomothetic structural biology studies of tertiary interactions and ancient insertions in rRNA complement these findings, once concentric layering assumptions are removed. Patterns of coaxial helical stacking reveal a frustrated dynamics of outward and inward ribosomal growth possibly mediated by structural grafting. The early rise of the ribosomal ‘turnstile’ suggests an evolutionary transition in natural biological computation. Results make explicit the need to understand processes of molecular growth and information transfer of macromolecules.

Thursday, August 13, 2015

Extinction Events Can Accelerate Evolution

Joel Lehman , Risto Miikkulainen

Published: August 12, 2015 | DOI: 10.1371/journal.pone.0132886

Source: Joel Lehman

Abstract

Extinction events impact the trajectory of biological evolution significantly. They are often viewed as upheavals to the evolutionary process. In contrast, this paper supports the hypothesis that although they are unpredictably destructive, extinction events may in the long term accelerate evolution by increasing evolvability. In particular, if extinction events extinguish indiscriminately many ways of life, indirectly they may select for the ability to expand rapidly through vacated niches. Lineages with such an ability are more likely to persist through multiple extinctions. Lending computational support for this hypothesis, this paper shows how increased evolvability will result from simulated extinction events in two computational models of evolved behavior. The conclusion is that although they are destructive in the short term, extinction events may make evolution more prolific in the long term.

“By contrast, the EES regards the genome as a sub-system of the cell designed by evolution to sense and respond to the signals that impinge on it. Organisms are not built from genetic ‘instructions’ alone, but rather self-assemble using a broad variety of inter-dependent resources.” p. 6

Tuesday, August 11, 2015

Here we argue that the notion of falsifiability, a key concept in defining a valid scientific theory, can be quantified using Bayesian Model Selection, which is a standard tool in modern statistics. This relates falsifiability to the quantitative version of Occam’s razor, and allows transforming some long-running arguments about the validity of certain scientific theories from philosophical discussions into mathematical calculations.
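As a toy illustration of this idea (our example, not one from the paper), compare a zero-parameter model against a more flexible one on coin-flip data. The flexible model's marginal likelihood is spread over its whole parameter range, so it is automatically penalized on data the simple model explains — the quantitative Occam's razor the authors invoke.

```python
import math

def log_evidence_fair(heads, tails):
    """M0: the coin is exactly fair -- no free parameters.
    Likelihood of the observed sequence (the binomial coefficient is
    omitted; it cancels in the Bayes factor below)."""
    return (heads + tails) * math.log(0.5)

def log_evidence_biased(heads, tails):
    """M1: unknown bias q with a uniform prior on [0, 1].
    Integrating q out gives the Beta function B(h+1, t+1) = h! t! / (h+t+1)!,
    computed via log-gamma for numerical stability."""
    return (math.lgamma(heads + 1) + math.lgamma(tails + 1)
            - math.lgamma(heads + tails + 2))

def bayes_factor_fair_vs_biased(heads, tails):
    """Evidence ratio P(data | M0) / P(data | M1)."""
    return math.exp(log_evidence_fair(heads, tails)
                    - log_evidence_biased(heads, tails))

# Near-fair data: the simpler theory wins even though M1 contains q = 1/2,
# because M1 wastes prior mass on biases the data rule out.
bf_close = bayes_factor_fair_vs_biased(52, 48)
# Strongly biased data overwhelm the Occam penalty and "falsify" M0.
bf_far = bayes_factor_fair_vs_biased(90, 10)
```

The sharper theory M0 makes a riskier prediction and so gains more evidence when it survives the test, but is decisively rejected when it fails — which is the sense in which falsifiability becomes a computable quantity.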