Friday, June 30, 2017

This paper presents a history of the changing meanings of the term “gene,” over more than a century, and a discussion of why this word, so crucial to genetics, needs redefinition today. In this account, the first two phases of 20th century genetics are designated the “classical” and the “neoclassical” periods, and the current molecular-genetic era the “modern period.” While the first two stages generated increasing clarity about the nature of the gene, the present period features complexity and confusion. Initially, the term “gene” was coined to denote an abstract “unit of inheritance,” to which no specific material attributes were assigned. As the classical and neoclassical periods unfolded, the term became more concrete, first as a dimensionless point on a chromosome, then as a linear segment within a chromosome, and finally as a linear segment in the DNA molecule that encodes a polypeptide chain. This last definition, from the early 1960s, remains the one employed today, but developments since the 1970s have undermined its generality. Indeed, they raise questions about both the utility of the concept of a basic “unit of inheritance” and the long implicit belief that genes are autonomous agents. Here, we review findings that have made the classic molecular definition obsolete and propose a new one based on contemporary knowledge.

It is often claimed that, as a result of scientific progress, we now know that the natural world displays no design. Although we have no interest in defending design hypotheses, we will argue that establishing claims to the effect that we know the denials of design hypotheses is more difficult than it seems. We do so by issuing two skeptical challenges to design-deniers. The first challenge draws inspiration from radical skepticism and shows how design claims are at least as compelling as radical skeptical scenarios in undermining knowledge claims, and in fact probably more so. The second challenge takes its cue from skeptical theism and shows how we are typically not in an epistemic position to rule out design.

Liquid water exists in two different forms, new research reveals. Here, an illustration of the water molecule in front of an X-ray pattern from high-density amorphous ice, which is created by applying high pressure at low temperature.

The importance of a molecular-level understanding of the properties, structure, and dynamics of liquid water is recognized in many scientific fields. It has been debated whether the observed high- and low-density amorphous ice forms are related to two distinct liquid forms. Here, we study experimentally the structure and dynamics of high-density amorphous ice as it relaxes into the low-density form. The unique aspect of this work is the combination of two X-ray methods: wide-angle X-ray scattering provides evidence for the structure at the atomic level, and X-ray photon-correlation spectroscopy provides insight into the motion at the nanoscale. The observed motion appears diffusive, indicating liquid-like dynamics during the relaxation from the high- to the low-density form.

Abstract

Water exists in high- and low-density amorphous ice forms (HDA and LDA), which could correspond to the glassy states of high- (HDL) and low-density liquid (LDL) in the metastable part of the phase diagram. However, the nature of both the glass transition and the high-to-low-density transition is debated and new experimental evidence is needed. Here we combine wide-angle X-ray scattering (WAXS) with X-ray photon-correlation spectroscopy (XPCS) in the small-angle X-ray scattering (SAXS) geometry to probe both the structural and dynamical properties during the high-to-low-density transition in amorphous ice at 1 bar. By analyzing the structure factor and the radial distribution function, the coexistence of two structurally distinct domains is observed at T = 125 K. XPCS probes the dynamics in momentum space, which in the SAXS geometry reflects structural relaxation on the nanometer length scale. The dynamics of HDA are characterized by a slow component with a large time constant, arising from viscoelastic relaxation and stress release from nanometer-sized heterogeneities. Above 110 K a faster, strongly temperature-dependent component appears, with a momentum-transfer dependence pointing toward nanoscale diffusion. This dynamical component slows down after the transition into the low-density form at 130 K, but remains diffusive. The diffusive character of both the high- and low-density forms is discussed in terms of different interpretations, and the results are most consistent with the hypothesis of a liquid–liquid transition in the ultraviscous regime.
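The XPCS observable behind statements like these is the decay of the normalized intensity autocorrelation function g2(q, τ) of the speckle pattern. As a rough, self-contained illustration (not the authors' analysis pipeline; the synthetic signal, the relaxation time, and the real-valued Gaussian field are all assumptions of this toy model), the sketch below estimates g2 from a simulated intensity trace:

```python
import numpy as np

def g2(intensity, max_lag):
    """Normalized intensity autocorrelation g2(tau) = <I(t)I(t+tau)> / <I>^2."""
    I = np.asarray(intensity, dtype=float)
    norm = I.mean() ** 2
    return np.array([np.mean(I[:I.size - lag] * I[lag:]) / norm
                     for lag in range(1, max_lag + 1)])

# Synthetic speckle trace: intensity = (Gaussian field)^2, with the field
# decorrelating exponentially over tau_relax steps (an AR(1) process).
rng = np.random.default_rng(0)
tau_relax, n = 50.0, 20000
phi = np.exp(-1.0 / tau_relax)
E = np.empty(n)
E[0] = rng.normal()
for t in range(1, n):
    E[t] = phi * E[t - 1] + np.sqrt(1.0 - phi ** 2) * rng.normal()

curve = g2(E ** 2, max_lag=200)
# curve starts well above 1 and decays toward 1 as the "sample" decorrelates;
# the decay time is what XPCS extracts at each momentum transfer q.
```

In a real experiment the fitted decay time is mapped out as a function of q; a q-dependence consistent with diffusion is what "nanoscale diffusion" refers to above.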

Wednesday, June 28, 2017

It is argued that some of the recent claims for cosmology are grossly overblown. Cosmology rests on a very small database: it suffers from many fundamental difficulties as a science (if it is a science at all) whilst observations of distant phenomena are difficult to make and harder to interpret. It is suggested that cosmological inferences should be tentatively made and sceptically received.

Tuesday, June 27, 2017

The Accidental Universe

Science’s crisis of faith

In the fifth century B.C., the philosopher Democritus proposed that all matter was made of tiny and indivisible atoms, which came in various sizes and textures—some hard and some soft, some smooth and some thorny. The atoms themselves were taken as givens. In the nineteenth century, scientists discovered that the chemical properties of atoms repeat periodically (and created the periodic table to reflect this fact), but the origins of such patterns remained mysterious. It wasn’t until the twentieth century that scientists learned that the properties of an atom are determined by the number and placement of its electrons, the subatomic particles that orbit its nucleus. And we now know that all atoms heavier than helium were created in the nuclear furnaces of stars.

The history of science can be viewed as the recasting of phenomena that were once thought to be accidents as phenomena that can be understood in terms of fundamental causes and principles. One can add to the list of the fully explained: the hue of the sky, the orbits of planets, the angle of the wake of a boat moving through a lake, the six-sided patterns of snowflakes, the weight of a flying bustard, the temperature of boiling water, the size of raindrops, the circular shape of the sun. All these phenomena and many more, once thought to have been fixed at the beginning of time or to be the result of random events thereafter, have been explained as necessary consequences of the fundamental laws of nature—laws discovered by human beings.

This long and appealing trend may be coming to an end. Dramatic developments in cosmological findings and thought have led some of the world’s premier physicists to propose that our universe is only one of an enormous number of universes with wildly varying properties, and that some of the most basic features of our particular universe are indeed mere accidents—a random throw of the cosmic dice. In which case, there is no hope of ever explaining our universe’s features in terms of fundamental causes and principles.

It is perhaps impossible to say how far apart the different universes may be, or whether they exist simultaneously in time. Some may have stars and galaxies like ours. Some may not. Some may be finite in size. Some may be infinite. Physicists call the totality of universes the “multiverse.” Alan Guth, a pioneer in cosmological thought, says that “the multiple-universe idea severely limits our hopes to understand the world from fundamental principles.” And the philosophical ethos of science is torn from its roots. As put to me recently by Nobel Prize–winning physicist Steven Weinberg, a man as careful in his words as in his mathematical calculations, “We now find ourselves at a historic fork in the road we travel to understand the laws of nature. If the multiverse idea is correct, the style of fundamental physics will be radically changed.”

The scientists most distressed by Weinberg’s “fork in the road” are theoretical physicists. Theoretical physics is the deepest and purest branch of science. It is the outpost of science closest to philosophy and religion. Experimental scientists occupy themselves with observing and measuring the cosmos, finding out what stuff exists, no matter how strange that stuff may be. Theoretical physicists, on the other hand, are not satisfied with observing the universe. They want to know why. They want to explain all the properties of the universe in terms of a few fundamental principles and parameters. These fundamental principles, in turn, lead to the “laws of nature,” which govern the behavior of all matter and energy.

An example of a fundamental principle in physics, first proposed by Galileo in 1632 and extended by Einstein in 1905, is the following: All observers traveling at constant velocity relative to one another should witness identical laws of nature. From this principle, Einstein derived his theory of special relativity. An example of a fundamental parameter is the mass of an electron, considered one of the two dozen or so “elementary” particles of nature. As far as physicists are concerned, the fewer the fundamental principles and parameters, the better.

The underlying hope and belief of this enterprise has always been that these basic principles are so restrictive that only one, self-consistent universe is possible, like a crossword puzzle with only one solution. That one universe would be, of course, the universe we live in. Theoretical physicists are Platonists. Until the past few years, they agreed that the entire universe, the one universe, is generated from a few mathematical truths and principles of symmetry, perhaps throwing in a handful of parameters like the mass of the electron. It seemed that we were closing in on a vision of our universe in which everything could be calculated, predicted, and understood.

However, two theories in physics, eternal inflation and string theory, now suggest that the same fundamental principles from which the laws of nature derive may lead to many different self-consistent universes, with many different properties. It is as if you walked into a shoe store, had your feet measured, and found that a size 5 would fit you, a size 8 would also fit, and a size 12 would fit equally well. Such wishy-washy results make theoretical physicists extremely unhappy. Evidently, the fundamental laws of nature do not pin down a single and unique universe. According to the current thinking of many physicists, we are living in one of a vast number of universes. We are living in an accidental universe. We are living in a universe incalculable by science.

This study replicates and extends the findings of previous research (Wright, H., & Jenks, R. A. (2016). Sex on the brain! Associations between sexual activity and cognitive function in older age. Age and Ageing, 45, 313–317. doi:10.1093/ageing/afv197) which found a significant association between sexual activity (SA) and cognitive function in older adults. Specifically, this study aimed to generalize these findings to a range of cognitive domains, and to assess whether increasing SA frequency is associated with increasing scores on a variety of cognitive tasks.

Methods:

Seventy-three participants aged 50–83 years took part in the study (38.4% male, 61.6% female). Participants completed the Addenbrooke’s Cognitive Examination-III (ACE-III) cognitive assessment and a questionnaire on SA frequency (never, monthly, or weekly), and general health and lifestyle.

Results:

Weekly SA was a significant predictor of total ACE-III, fluency, and visuospatial scores in regression models that controlled for age, gender, education, and cardiovascular health.

Discussion:

Greater frequency of SA was associated with better overall ACE-III scores and scores on subtests of verbal fluency and visuospatial ability. Both of these tasks involve working memory and executive function, and links between sexual behavior, memory, and dopamine are discussed. The findings have implications for the maintenance of intimate relationships in later life.

For some decades now, experts in several fields of the science of human nature, society, and culture have been using evolutionary models to explain their domain-specific phenomena. This has led to the prominent idea that the historical development of human culture, in all or many of its facets, is best described as a Darwinian process that is not based on genes but is still driven by the principles of variation, selection, and reproduction. At the beginning of the 21st century, a generalized theory of evolution appears to be emerging as an interdisciplinary theoretical structure, finding its place between likewise interdisciplinary frameworks such as systems theory or action theory. Subdisciplines such as evolutionary psychology, evolutionary game theory, evolutionary epistemology, and the theory of cultural evolution in general seem to provide a set of models and explanatory tools that can ultimately be seen as varieties of one and the same basic theoretical structure: a generalized theory of evolution.

The generalization of the theory of evolution has attracted not only emphatic supporters but has also been exposed to severe critique. In any case, various interesting questions can be raised within this framework. Is a Darwinian theory of cultural evolution a proper candidate to synthesize the social sciences? What is the added value of evolutionary explanations? More specifically, can language, meaning, and content be explained in terms of evolutionary signaling games of coordination? Which facets of biological evolutionary systems carry over to cultural evolutionary systems, and where do the two differ in relevant respects? For example, are there replicators in the cultural realm at all, and if so, what is their methodological and ontological status?

The conference aims to gather answers to some of these frequently raised questions and explores recent attempts to move beyond mere qualitative theorizing in the domain of generalized evolutionary systems. By bringing together researchers with a common interest but with different backgrounds and toolboxes, we hope to inspire interdisciplinary discussions and new collaborations.

Keynote Speakers:

Daniel Dennett (Tufts University)

Eva Jablonka (Tel Aviv University)

Alex Mesoudi (University of Exeter)

Thomas Reydon (University of Hannover)

Gerhard Schurz (University of Duesseldorf)

Brian Skyrms (University of California, Irvine)

Call for papers:

We invite contributions devoted to all fields of The Generalized Theory of Evolution. Abstracts should be suitable for a 20-minute presentation (plus 10 minutes of discussion) and contain no more than 500 words, including references to important work that will be addressed. They must be in English and prepared for blind review. The title of the paper as well as the name, affiliation, and e-mail address of the author must be included in a separate document. It should be clear from your abstract which authors your paper will address. Files have to be submitted via e-mail.

The submission deadline is September 1, 2017. Authors will be notified by September 30, 2017.

Chromatin is a system of proteins, RNA, and DNA that interact with each other to organize and regulate genetic information within eukaryotic nuclei. Chromatin proteins carry out essential functions: packing DNA during cell division, partitioning DNA into sub-regions within the nucleus, and controlling levels of gene expression. There is a growing interest in manipulating chromatin dynamics for applications in medicine and agriculture. Progress in this area requires the identification of design rules for the chromatin system. Here, we focus on the relationship between the physical structure and function of chromatin proteins. We discuss key research that has elucidated the intrinsic properties of chromatin proteins and how this information informs design rules for synthetic systems. Recent work demonstrates that chromatin-derived peptide motifs are portable and in some cases can be customized to alter their function. Finally, we present a workflow for fusion protein design and discuss best practices for engineering chromatin to assist scientists in advancing the field of synthetic epigenetics.

Obstacles to inferring species trees from whole genome data sets range from algorithmic and data management challenges to the wholesale discordance in evolutionary history found in different parts of a genome. Recent work that builds trees directly from genomes by parsing them into sets of small k-mer strings holds promise to streamline and simplify these efforts, but existing approaches do not account well for gene tree discordance. We describe a “seed and extend” protocol that finds nearly exact matching sets of orthologous k-mers and extends them to construct data sets that can properly account for genomic heterogeneity. Exploiting an efficient suffix array data structure, sets of whole genomes can be parsed and converted into phylogenetic data matrices rapidly, with contiguous blocks of k-mers from the same chromosome, gene, or scaffold concatenated as needed. Phylogenetic trees constructed from highly curated rice genome data and a diverse set of six other eukaryotic whole genome, transcriptome, and organellar genome data sets recovered trees nearly identical to published phylogenomic analyses, in a small fraction of the time, and requiring many fewer parameter choices. Our method’s ability to retain local homology information was demonstrated by using it to characterize gene tree discordance across the rice genome, and by its robustness to the high rate of interchromosomal gene transfer found in several rice species.
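The two ingredients of the protocol, decomposing genomes into k-mer sets and extending exact seed matches, can be sketched in miniature. The helper names below (`kmers`, `jaccard`, `extend_seed`) and the toy sequences are ours, not the authors'; real implementations work over suffix arrays rather than Python sets:

```python
def kmers(seq, k):
    """All distinct k-length substrings (k-mers) of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of two k-mer sets: shared fraction of the union."""
    return len(a & b) / len(a | b)

def extend_seed(s1, i, s2, j, k):
    """Extend an exact k-mer seed s1[i:i+k] == s2[j:j+k] while bases agree."""
    left = 0
    while min(i, j) > left and s1[i - left - 1] == s2[j - left - 1]:
        left += 1
    right = 0
    while (i + k + right < len(s1) and j + k + right < len(s2)
           and s1[i + k + right] == s2[j + k + right]):
        right += 1
    return s1[i - left:i + k + right]

# Toy "genomes": g2 differs from g1 by one substitution; g3 is more diverged.
g1 = "ACGTACGGATCCGATTACAGGCATT"
g2 = "ACGTACGGATCCGCTTACAGGCATT"
g3 = "TTGACCAGTAGGCAACTGATCCGGA"
k = 5

d12 = jaccard(kmers(g1, k), kmers(g2, k))
d13 = jaccard(kmers(g1, k), kmers(g3, k))
# Closer genomes share a larger fraction of their k-mers, so d12 > d13.

seed = "GATCC"  # a 5-mer that happens to occur in both g1 and g3
extended = extend_seed(g1, g1.index(seed), g3, g3.index(seed), k)
```

Extended blocks like this, grouped by chromosome or scaffold, are what let the method keep local homology information instead of pooling all k-mers genome-wide.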

The feat made headlines around the world: “Scientists Say Human Genome is Complete,” the New York Times announced in 2003. “The Human Genome,” the journals Science and Nature said in identical ta-dah cover lines unveiling the historic achievement.

There was one little problem.

“As a matter of truth in advertising, the ‘finished’ sequence isn’t finished,” said Eric Lander, who led the lab at the Whitehead Institute that deciphered more of the genome for the government-funded Human Genome Project than any other. “I always say ‘finished’ is a term of art.”

“It’s very fair to say the human genome was never fully sequenced,” Craig Venter, another genomics luminary, told STAT.

“The human genome has not been completely sequenced and neither has any other mammalian genome as far as I’m aware,” said Harvard Medical School bioengineer George Church, who made key early advances in sequencing technology.

What insiders know, however, is not well-understood by the rest of us, who take for granted that each A, T, C, and G that makes up the DNA of all 23 pairs of human chromosomes has been completely worked out. When scientists finished the first draft of the human genome, in 2001, and again when they had the final version in 2003, no one lied, exactly. FAQs from the National Institutes of Health refer to the sequence’s “essential completion,” and to the question, “Is the human genome completely sequenced?” they answer, “Yes,” with the caveat that it’s “as complete as it can be” given available technology.

Perhaps nobody paid much attention because the missing sequences didn’t seem to matter. But now it appears they may play a role in conditions such as cancer and autism.

“A lot of people in the 1980s and 1990s [when the Human Genome Project was getting started] thought of these regions as nonfunctional,” said Karen Miga, a molecular biologist at the University of California, Santa Cruz. “But that’s no longer the case.” Some of them, called satellite regions, misbehave in some forms of cancer, she said, “so something is going on in these regions that’s important.”

Miga regards them as the explorer Livingstone did Africa — terra incognita whose inaccessibility seems like a personal affront. Sequencing the unsequenced, she said, “is the last frontier for human genetics and genomics.”

Church, too, has been making that point, mentioning it at both the May meeting of an effort to synthesize genomes and last weekend’s meeting of the International Society for Stem Cell Research. Most of the unsequenced regions, he said, “have some connection to aging and aneuploidy” (an abnormal number of chromosomes such as what occurs in Down syndrome). Church estimates 4 percent to 9 percent of the human genome hasn’t been sequenced. Miga thinks it’s 8 percent.

The reason for these gaps is that DNA sequencing machines don’t read genomes like humans read books, from the first word to the last. Instead, they first randomly chop up copies of the 23 pairs of chromosomes, which total some 3 billion “letters,” so the machines aren’t overwhelmed. The resulting chunks contain from 1,000 letters (during the Human Genome Project) to a few hundred (in today’s more advanced sequencing machines). The chunks overlap. Computers match up the overlaps, assembling the chunks into the correct sequence.
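The "match up the overlaps" step is, in miniature, the classic overlap-based assembly idea. A deliberately naive greedy sketch (ours, not any production assembler; real tools use overlap or de Bruijn graphs and must cope with errors and repeats) also shows where gaps come from, since reads that share no detectable overlap simply cannot be joined:

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of `a` that is also a prefix of `b`."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(reads):
    """Repeatedly merge the pair of reads with the largest overlap."""
    reads = list(reads)
    while len(reads) > 1:
        best = (0, None, None)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j and overlap(a, b) > best[0]:
                    best = (overlap(a, b), i, j)
        n, i, j = best
        if n == 0:
            break  # no overlaps left: the remaining pieces stay as gaps
        merged = reads[i] + reads[j][n:]
        reads = [r for x, r in enumerate(reads) if x not in (i, j)] + [merged]
    return reads

# Toy example: chop a tiny "genome" into overlapping chunks, then rebuild it.
genome = "ATGGCGTGCAATGCCGT"
reads = [genome[i:i + 8] for i in (0, 5, 9)]
assembled = greedy_assemble(reads)
```

Highly repetitive stretches defeat this logic because many different placements produce equally good overlaps, which is one reason the satellite regions discussed above stayed unsequenced for so long.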

• Leading- and lagging-strand polymerases function autonomously within a replisome

• Replication is kinetically discontinuous and punctuated by pauses and rate-switches

• The helicase slows in a self-regulating fail-safe mechanism when synthesis pauses

• Priming is scaled to a 5-fold reduced processivity of the lagging-strand polymerase

Summary

It has been assumed that DNA synthesis by the leading- and lagging-strand polymerases in the replisome must be coordinated to avoid the formation of significant gaps in the nascent strands. Using real-time single-molecule analysis, we establish that leading- and lagging-strand DNA polymerases function independently within a single replisome. Although average rates of DNA synthesis on leading and lagging strands are similar, individual trajectories of both DNA polymerases display stochastically switchable rates of synthesis interspersed with distinct pauses. DNA unwinding by the replicative helicase may continue during such pauses, but a self-governing mechanism, where helicase speed is reduced by ∼80%, permits recoupling of polymerase to helicase. These features imply a more dynamic, kinetically discontinuous replication process, wherein contacts within the replisome are continually broken and reformed. We conclude that the stochastic behavior of replisome components ensures complete DNA duplication without requiring coordination of leading- and lagging-strand synthesis.