Friday, April 27, 2018

Published on April 13, 2018

The Scientific Importance of Free Speech

written by Adam Perkins

Editor’s note: this is a shortened version of a speech that the author was due to give last month at King’s College London which was canceled because the university deemed the event to be too ‘high risk’.

A quick Google search suggests that free speech is regarded as an important virtue for a functional, enlightened society. For example, according to George Orwell: “If liberty means anything at all, it means the right to tell people what they do not want to hear.” Likewise, Ayaan Hirsi Ali remarked: “Free speech is the bedrock of liberty and a free society, and yes, it includes the right to blaspheme and offend.” In a similar vein, Bill Hicks declared: “Freedom of speech means you support the right of people to say exactly those ideas which you do not agree with.”

But why do we specifically need free speech in science? Surely we just take measurements and publish our data? No chit-chat required. We need free speech in science because science is not really about microscopes, or pipettes, or test tubes, or even Large Hadron Colliders. These are merely tools that help us to accomplish a far greater mission: to choose between rival narratives in the vicious, no-holds-barred battle of ideas that we call “science”.

For example, stomach problems such as gastritis and ulcers were historically viewed as the products of stress. This opinion was challenged in the late 1970s by the Australian doctors Robin Warren and Barry Marshall, who suspected that stomach problems were caused by infection with the bacterium Helicobacter pylori. Frustrated by skepticism from the medical establishment and by difficulties publishing his academic papers, in 1984 Barry Marshall appointed himself his own experimental subject and drank a Petri dish full of H. pylori culture. He promptly developed gastritis, which was then cured with antibiotics, suggesting that H. pylori has a causal role in this type of illness. You would have thought that, given this clear-cut evidence supporting Warren and Marshall’s opinion, their opponents would immediately concede defeat. But scientists are only human, and opposition to Warren and Marshall persisted. In the end, it took two decades for their crucial work on H. pylori to gain the recognition it deserved, with the award of the 2005 Nobel Prize in Physiology or Medicine.

From this episode we can see that even in situations where laboratory experiments can provide clear evidence in favour of a particular scientific opinion, opponents will typically refuse to accept it. Instead, scientists tend to cling so stubbornly to their pet theories that no amount of evidence will change their minds, and only death can bring an end to the argument, as famously observed by Max Planck:

A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.

It is a salutary lesson that even in a society that permits free speech, Warren and Marshall had difficulty publishing their results. If their opponents had possessed the legal power to silence them, their breakthrough would have taken even longer to become clinically accepted, and even more people would have suffered unnecessarily from gastric illness that could have been cured quickly and easily with a course of antibiotics. But scientific domains in which a single experiment can provide a definitive answer are rare. For example, Charles Darwin’s principle of evolution by natural selection concerns slow, large-scale processes that are unsuited to testing in a laboratory. In these cases, we take a bird’s eye view of the facts of the matter and attempt to form an opinion about what they mean.

This allows a lot of room for argument, but as long as both sides are able to speak up, we can at least have a debate: when a researcher disagrees with the findings of an opponent’s study, they traditionally write an open letter to the journal editor critiquing the paper in question and setting out their counter-evidence. Their opponent then writes a rebuttal, with both letters published in the journal with names attached, so that the public can weigh up the opinions of the two parties and decide for themselves whose stance they favour. I recently took part in just such an exchange of letters in the elite journal Trends in Cognitive Sciences. The tone was fierce and neither side changed its opinion, but at least there is a debate that the public can observe and evaluate.

The existence of scientific debate is also crucial because as the Nobel Prize-winning physicist Richard Feynman remarked in 1963: “There is no authority who decides what is a good idea.” The absence of an authority who decides what is a good idea is a key point because it illustrates that science is a messy business and there is no absolute truth. This was articulated in Tom Schofield’s posthumously published essay in which he wrote:

[S]cience is not about finding the truth at all, but about finding better ways of being wrong. The best scientific theory is not the one that reveals the truth — that is impossible. It is the one that explains what we already know about the world in the simplest way possible, and that makes useful predictions about the future. When I accepted that I would always be wrong, and that my favourite theories are inevitably destined to be replaced by other, better, theories — that is when I really knew that I wanted to be a scientist.

When one side of a scientific debate is allowed to silence the other side, this is an impediment to scientific progress because it prevents bad theories from being replaced by better theories. Or, even worse, it causes civilization to go backward, such as when a good theory is replaced by a bad theory that it had previously displaced. The latter is what happened in the most famous illustration of the dire consequences that can occur when one side of a scientific debate is silenced, which concerns the theory that acquired characteristics are inherited. This idea had been out of fashion for decades, in part due to research in the 1880s by August Weismann, who conducted an experiment that entailed amputating the tails of 68 white mice over five generations. He found that no mice were born without a tail or even with a shorter tail. He stated: “901 young were produced by five generations of artificially mutilated parents, and yet there was not a single example of a rudimentary tail or of any other abnormality in this organ.”

These findings and others like them led to the widespread acceptance of Mendelian genetics. Unfortunately for the people of the USSR, Mendelian genetics was incompatible with socialist ideology, and so in the 1930s it was replaced in the Soviet Union with Trofim Lysenko’s socialism-friendly idea that acquired characteristics are inherited. Scientists who disagreed were imprisoned or executed. Soviet agriculture collapsed and millions starved.

The tendency to silence scientists with inconvenient opinions has since been labeled Lysenkoism, as this episode provides the most famous example of the harm that can be done when competing scientific opinions cannot be expressed equally freely. Left-wingers tend to be the most prominent Lysenkoists, but the suppression of scientific opinions can occur in other contexts too. The Space Shuttle Challenger disaster in 1986 is a famous example.

Glycine (Gly), the simplest amino-acid building block of proteins, has been identified on icy dust grains in the interstellar medium, icy comets, and ice-covered meteorites. These astrophysical ices contain simple molecules (e.g., CO2, H2O, CH4, HCN, and NH3) and are exposed to complex radiation fields, e.g., UV, γ, or X-rays, stellar/solar wind particles, or cosmic rays. While much current effort is focused on understanding the radiochemistry induced in these ices by high-energy radiation, the effects of the abundant secondary low-energy electrons (LEEs) it produces have been mostly assumed rather than studied. Here we present the results for the exposure of multilayer CO2:CH4:NH3 ice mixtures to 0–70 eV electrons under simulated astrophysical conditions. Mass-selected temperature programmed desorption (TPD) of our electron-irradiated films reveals multiple products, most notably intact glycine, which is supported by control measurements of both irradiated and un-irradiated binary mixture films, and un-irradiated CO2:CH4:NH3 ices spiked with Gly. The threshold of Gly formation by LEEs is near 9 eV, while the TPD analysis of Gly film growth allows us to determine the “quantum” yield for 70 eV electrons to be about 0.004 Gly per incident electron. Our results show that simple amino acids can be formed directly from simple molecular ingredients, none of which possess preformed C—C or C—N bonds, by the copious secondary LEEs that are generated by ionizing radiation in astrophysical ices.
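The reported yield lends itself to a quick order-of-magnitude calculation. A minimal Python sketch, where the electron dose is a hypothetical illustration (only the 0.004 Gly-per-electron figure comes from the abstract):

# Back-of-envelope glycine production from the reported yield of
# ~0.004 Gly per incident 70 eV electron. The dose below is hypothetical.
GLY_PER_ELECTRON = 0.004       # reported "quantum" yield at 70 eV
electrons_incident = 1e15      # hypothetical total electron dose on the film

gly_molecules = GLY_PER_ELECTRON * electrons_incident
print(f"~{gly_molecules:.1e} glycine molecules formed")  # ~4.0e+12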

We study the dynamics of a supersonically expanding, ring-shaped Bose-Einstein condensate both experimentally and theoretically. The expansion redshifts long-wavelength excitations, as in an expanding universe. After expansion, energy in the radial mode leads to the production of bulk topological excitations—solitons and vortices—driving the production of a large number of azimuthal phonons and, at late times, causing stochastic persistent currents. These complex nonlinear dynamics, fueled by the energy stored coherently in one mode, are reminiscent of a type of “preheating” that may have taken place at the end of inflation.
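As a rough aid to intuition (a heuristic of my own, not a formula quoted from the paper): for an azimuthal phonon of integer mode number m on a ring of radius R(t), a sound-like dispersion gives

\[ \omega_m(t) \approx \frac{m\, c_s(t)}{R(t)}, \qquad c_s = \sqrt{\frac{g\, n}{M}}, \]

so as the ring expands, R grows and the density n (and hence the sound speed c_s) falls, lowering the mode frequency over time, which is the analog of cosmological redshift the abstract describes.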

Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article’s title, journal citation, and DOI.

Recent developments in synthetic molecular motors and pumps have sprung from a remarkable confluence of experiment and theory. Synthetic accomplishments have facilitated the ability to design and create molecules, many of them featuring mechanically bonded components, to carry out specific functions in their environment—walking along a polymeric track, unidirectional circling of one ring about another, synthesizing stereoisomers according to an external protocol, or pumping rings onto a long rod-like molecule to form and maintain high-energy, complex, nonequilibrium structures from simpler antecedents. Progress in the theory of nanoscale stochastic thermodynamics, specifically the generalization and extension of the principle of microscopic reversibility to the single-molecule regime, has enhanced the understanding of the design requirements for achieving strong unidirectional motion and high efficiency of these synthetic molecular machines for harnessing energy from external fluctuations to carry out mechanical and/or chemical functions in their environment. A key insight is that the interaction between the fluctuations and the transition state energies plays a central role in determining the steady-state concentrations. Kinetic asymmetry, a requirement for stochastic adaptation, occurs when there is an imbalance in the effect of the fluctuations on the forward and reverse rate constants. Because of strong viscosity, the motions of the machine can be viewed as mechanical equilibrium processes where mechanical resonances are simply impossible but where the probability distributions for the state occupancies and trajectories are very different from those that would be expected at thermodynamic equilibrium.
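A compact way to state the microscopic-reversibility constraint this abstract builds on (a standard relation from stochastic thermodynamics, not taken from the paper itself): for a transition between states i and j,

\[ \frac{k_{ij}}{k_{ji}} = \exp\!\left( -\frac{G_j - G_i}{k_B T} \right). \]

A fluctuation that only raises or lowers the transition-state (barrier) energy multiplies k_{ij} and k_{ji} by the same factor and leaves this ratio intact; kinetic asymmetry arises when fluctuations modulate the barriers of different pathways in a cycle unequally, which can drive directed flux even though every individual ratio still satisfies the relation above.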

Long-distance axonal transport is critical to the maintenance and function of neurons. Robust transport is ensured by the coordinated activities of multiple molecular motors acting in a team. Conventional live-cell imaging techniques used in axonal transport studies detect this activity by visualizing the translational dynamics of a cargo. However, translational measurements are insensitive to torques induced by motor activities. By using gold nanorods and multichannel polarization microscopy, we simultaneously measure the rotational and translational dynamics for thousands of axonally transported endosomes. We find that the rotational dynamics of an endosome provide complementary information regarding molecular motor activities to the conventionally tracked translational dynamics. Rotational dynamics correlate with translational dynamics, particularly in cases of increased rotation after switches between kinesin- and dynein-mediated transport. Furthermore, unambiguous measurement of nanorod angle shows that endosome-contained nanorods align with the orientation of microtubules, suggesting a direct mechanical linkage between the ligand-receptor complex and the microtubule motors.
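For readers unfamiliar with polarization-based orientation tracking, here is a minimal sketch under a simple dipole assumption; the two-channel model, the helper name, and the intensity values are my illustration, and the paper's multichannel method is more elaborate:

import math

# Toy dipole model: scattered intensity in two orthogonal polarization
# channels varies as cos^2 and sin^2 of the nanorod's in-plane angle.
def inplane_angle_deg(i_parallel, i_perpendicular):
    """Estimate the in-plane orientation angle from two channel counts."""
    return math.degrees(math.atan2(math.sqrt(i_perpendicular),
                                   math.sqrt(i_parallel)))

print(inplane_angle_deg(800.0, 200.0))  # ~26.6 degrees (hypothetical counts)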

This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial license, which permits use, distribution, and reproduction in any medium, so long as the resultant use is not for commercial advantage and provided the original work is properly cited.

Male and female gametes differing in size—anisogamy—emerged independently from isogamous ancestors in various eukaryotic lineages, although the genetic bases of this emergence are still unknown. Volvocine green algae are a model lineage for investigating the transition from isogamy to anisogamy. Here we focus on two closely related volvocine genera that bracket this transition—isogamous Yamagishiella and anisogamous Eudorina. We generated de novo nuclear genome assemblies of both sexes of Yamagishiella and Eudorina to identify the dimorphic sex-determining chromosomal region or mating-type locus (MT) from each. In contrast to the large (>1 Mb) and complex MT of oogamous Volvox, the Yamagishiella and Eudorina MTs are smaller (7–268 kb) and simpler, with only two sex-limited genes—the minus/male-limited MID and the plus/female-limited FUS1. No prominently dimorphic gametologs were identified in either species. Thus, the first step to anisogamy in volvocine algae presumably occurred without an increase in MT size and complexity.

Acknowledgements

We thank the staff of the Comparative Genomics Laboratory at NIG for supporting genome sequencing. Computations were partially performed on the NIG supercomputer at ROIS National Institute of Genetics. This work was supported by Grants-in-Aid for Scientific Research on Innovative Areas “Genome Science” (grant number 221S0002; to A.T. and A.F.), Scientific Research (A) (grant number 16H02518; to H. Nozaki), Research Activity Start-up (grant number 16H06734; to T.H.), Scientific Research (C) (grant number 17K07510; to H.K.-T.), and Scientific Research on Innovative Areas (grant number 17H05840; to T.H.) from MEXT/JSPS KAKENHI, and by the National Institutes of Health (grant number GM 078376; to J.G.U.).

It is undeniably very logical to first formulate an unambiguous definition of “Life” before engaging in defining the parameters instrumental to Life's evolution. Because nearly everybody assumes, erroneously in my opinion, that catching Life's essence in a single sentence is impossible, this way of thinking has remained largely unexplored in evolutionary theory. Upon analyzing what exactly happens at the transition from “still alive” to “just dead,” the following definition emerged. What we call “Life” (L) is an activity. It is nothing other than the total sum (∑) of all communication acts (C) executed, at moment t, by entities organized as sender-receiver compartments: L = ∑C. Such “living” entities are self-electrifying and talking (= communicating) aggregates of fossil stardust operating in an environment heavily polluted by toxic calcium. Communication is a multifaceted, complex process that is seldom well explained in introductory textbooks of biology. Communication is instrumental to adaptation because, at the cellular level, any act of communication is in fact a problem-solving act. It can be logically deduced that not Natural Selection itself but the communication/problem-solving activity preceding selection is the universal driving force of evolution. This runs against what textbooks usually claim, although doubt about the status of Natural Selection as a driving force has been around for a long time. Finally, adopting the sender-receiver compartment, with its 2 memory systems (genetic and cognitive, both with their own rules) and 2 types of progeny (“physical children” and “pupils”), as the universal unit of architecture and function of all living entities also enables the seamless integration of cultural and organic evolution, another long-standing tough problem in evolutionary theory. Paraphrasing Theodosius Dobzhansky, the very essence of biology is: “Nothing in biology and evolutionary theory makes sense except in the light of the ability of living matter to communicate, and by doing so, to solve problems.”

Human genome function is underpinned by the primary storage of genetic information in canonical B-form DNA, with a second layer of DNA structure providing regulatory control. I-motif structures are thought to form in cytosine-rich regions of the genome and to have regulatory functions; however, in vivo evidence for the existence of such structures has so far remained elusive. Here we report the generation and characterization of an antibody fragment (iMab) that recognizes i-motif structures with high selectivity and affinity, enabling the detection of i-motifs in the nuclei of human cells. We demonstrate that the in vivo formation of such structures is cell-cycle and pH dependent. Furthermore, we provide evidence that i-motif structures are formed in regulatory regions of the human genome, including promoters and telomeric regions. Our results support the notion that i-motif structures play key regulatory roles in the genome.

(This article belongs to the Special Issue Biology in the Early 21st Century: Evolution Beyond Selection)

Abstract

In functional genomics studies, research is dedicated to unveiling the function of genes using gene knockouts: model organisms in which a gene is artificially inactivated. The idea is that, by knocking out the gene, the provoked phenotype will inform us about the function of the gene. Still, the function of many genes cannot be elucidated, because disruption of conserved sequences, including protein-coding genes, often does not directly affect the phenotype. Since the phenomenon was first observed in the early nineties of the last century, these so-called ‘no-phenotype knockouts’ have met with great skepticism and resistance from dyed-in-the-wool selectionists. Still, functional genomics of the late 20th and early 21st centuries has taught us two important lessons. First, two or more unrelated genes can often substitute for each other; and second, some genes are only present in the genome in a silent state. In the laboratory, the disruption of such genes does not negatively influence reproductive success and does not produce measurable fitness effects in the species. The genes are redundant. Genetic redundancy, one of the big surprises of modern biology, can thus be defined as the condition in which the inactivation of a gene is selectively neutral. The no-phenotype knockout is not just a freak of the laboratory. Genetic variants known as homozygous loss-of-function (HLOF) variants are of considerable scientific and clinical interest, as they represent experiments of nature qualifying as “natural knockouts”. Such natural knockouts challenge the conventional neo-Darwinian appraisal that genetic information is the result of natural selection acting on random genetic variation.

This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. (CC BY 4.0).


Citation: Mikhail I Katsnelson et al 2018 Phys. Scr. 93 043001. Received 28 August 2017; accepted 30 January 2018; published 23 February 2018. DOI: https://doi.org/10.1088/1402-4896/aaaba4

Abstract

Biological systems reach organizational complexity that far exceeds the complexity of any known inanimate objects. Biological entities undoubtedly obey the laws of quantum physics and statistical mechanics. However, is modern physics sufficient to adequately describe, model and explain the evolution of biological complexity? Detailed parallels have been drawn between statistical thermodynamics and the population-genetic theory of biological evolution. Based on these parallels, we outline new perspectives on biological innovation and major transitions in evolution, and introduce a biological equivalent of thermodynamic potential that reflects the innovation propensity of an evolving population. Deep analogies have been suggested to also exist between the properties of biological entities and processes, and those of frustrated states in physics, such as glasses. Such systems are characterized by frustration, whereby local states with minimal free energy conflict with the global minimum, resulting in 'emergent phenomena'. We extend such analogies by examining frustration-type phenomena, such as conflicts between different levels of selection, in biological evolution. These frustration effects appear to drive the evolution of biological complexity. We further address evolution in multidimensional fitness landscapes from the point of view of percolation theory and suggest that percolation at a level above the critical threshold dictates the tree-like evolution of complex organisms. Taken together, these multiple connections between fundamental processes in physics and biology imply that constructing a meaningful physical theory of biological evolution might not be a futile effort. However, it is unrealistic to expect that such a theory can be created in one fell swoop; if it ever comes into being, this can only happen through the integration of multiple physical models of evolutionary processes. Furthermore, the existing framework of theoretical physics is unlikely to suffice for adequate modeling of the biological level of complexity, and new developments within physics itself are likely to be required.
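One concrete version of the thermodynamics/population-genetics parallel the abstract alludes to (in the spirit of Sella and Hirsh's statistical-physics formulation of population genetics; my choice of mapping, not necessarily the one the authors develop): under weak mutation, the stationary probability that a population occupies genotype x is

\[ \pi(x) \;\propto\; e^{\,\nu F(x)} \quad \longleftrightarrow \quad p(x) \;\propto\; e^{-E(x)/k_B T}, \]

with log fitness F playing the role of negative energy and an effective population size ν playing the role of inverse temperature: large populations are "cold" and concentrate on fitness peaks, while small populations are "hot" and drift.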

An ambitious study in yeast shows that the health of cells depends on the highly intertwined effects of many genes, few of which can be deleted together without consequence.

The activities of genes in complex organisms, including humans, may be deeply interrelated.


By knocking out genes three at a time, scientists have painstakingly deduced the web of genetic interactions that keeps a cell alive. Researchers long ago identified essential genes that yeast cells can’t live without, but new work, which appears today in Science, shows that looking only at those gives a skewed picture of what makes cells tick: Many genes that are inessential on their own become crucial as others disappear. The result implies that the true minimum number of genes that yeast — and perhaps, by extension, other complex organisms — need to survive and thrive may be surprisingly large.

About 20 years ago, Charles Boone and Brenda Andrews decided to do something slightly nuts. The yeast biologists, both professors at the University of Toronto, set out to systematically destroy or impair the genes in yeast, two by two, to get a sense of how the genes functionally connected to one another. Only about 1,000 of the 6,000 genes in the yeast genome, or roughly 17 percent, are considered essential for life: If a single one of them is missing, the organism dies. But it seemed that many other genes whose individual absence was not enough to spell the end might, if destroyed in tandem, sicken or kill the yeast. Those genes were likely to do the same kind of job in the cell, the biologists reasoned, or to be involved in the same process; losing both meant the yeast could no longer compensate.

Ignorant as science may still be about certain happenings in yeast, it’s dwarfed by our ignorance of what is going on in our own cells.

Boone and Andrews realized they could use this idea to figure out what various genes were doing. They and their collaborators went about it deliberately, by first generating more than 20 million strains of yeast that were each missing two genes — almost all of the unique combinations of knockouts among those 6,000 genes. The researchers then scored how healthy each of the double mutant strains was and investigated how the missing genes could be related. The results let the researchers sketch a map of the shadowy web of interactions that underlie life. Two years ago, they reported the details of the map and revealed that it had already allowed researchers to discover previously unknown roles for genes.

Along the way, however, they realized that a surprising number of genes in the experiment didn’t have any obvious interactions with others. “Maybe, in some cases, deleting two genes isn’t enough,” Andrews said, reflecting on their thoughts at the time. Elena Kuzmin, a graduate student in the lab who is now a postdoc at McGill University, decided to go one step further by knocking out a third gene.

In the paper out today in Science, Kuzmin, Boone, Andrews and their collaborators at the University of Toronto, the University of Minnesota and elsewhere report that this effort has yielded a deeper and more detailed map of the cell’s inner workings. Unlike in the double mutant experiments, the researchers did not make every possible combination of mutations — there are about 36 billion different ways to knock out three genes in yeast. Instead, they looked at the pairs of genes they’d already knocked out and ranked their interactions according to severity. They took a number of those pairs, whose effects ranged from making cells grow a little slower to making them significantly impaired, and matched them up one by one with knockouts of other genes, generating about 200,000 triple mutant strains. They monitored how quickly colonies of the mutant yeast grew, and after noting which mutants were struggling, they checked databases to see what the disabled genes were thought to do.
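To make the combinatorics and the scoring concrete, a short Python sketch: the triple-knockout count follows directly from the binomial coefficient, and the interaction scores below follow the standard multiplicative-epistasis form (the trigenic expression mirrors the general approach of such studies, but the exact form and the helper names are my assumption, and the fitness values are hypothetical):

import math

# Number of distinct triple knockouts among ~6,000 yeast genes.
print(math.comb(6000, 3))  # 35,982,002,000 -- the "about 36 billion"

# Multiplicative epistasis: compare a mutant's fitness with the product
# of its constituents' fitnesses (1.0 = wild-type growth rate).
def digenic_score(f_i, f_j, f_ij):
    return f_ij - f_i * f_j

def trigenic_score(f_i, f_j, f_k, f_ij, f_ik, f_jk, f_ijk):
    # Deviation of the triple mutant after subtracting the expected
    # single-gene and pairwise contributions.
    return (f_ijk - f_i * f_j * f_k
            - digenic_score(f_i, f_j, f_ij) * f_k
            - digenic_score(f_i, f_k, f_ik) * f_j
            - digenic_score(f_j, f_k, f_jk) * f_i)

print(digenic_score(0.9, 0.9, 0.5))   # -0.31: synthetic sickness
print(trigenic_score(0.9, 0.9, 0.9,
                     0.81, 0.81, 0.81, 0.4))  # -0.329: trigenic interaction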

Dinosaurs diversified in two steps during the Triassic. They originated about 245 Ma, during the recovery from the Permian-Triassic mass extinction, and then remained insignificant until they exploded in diversity and ecological importance during the Late Triassic. Hitherto, this Late Triassic explosion was poorly constrained and poorly dated. Here we provide evidence that it followed the Carnian Pluvial Episode (CPE), dated to 234–232 Ma, a time when climates switched from arid to humid and back to arid again. Our evidence comes from a combined analysis of skeletal evidence and footprint occurrences, and especially from the exquisitely dated ichnofaunas of the Italian Dolomites. These provide evidence of tetrapod faunal compositions through the Carnian and Norian, and show that dinosaur footprints appear exactly at the time of the CPE. We then argue that dinosaurs diversified explosively in the mid-Carnian, at a time of major climate and floral change and the extinction of key herbivores, which the dinosaurs opportunistically replaced.

M.B. and P.G. designed the study. M.B., F.M.P., and M.J.B. developed the palaeontological parts of the study, while P.G. and P.M. contributed to the more geological sections. All authors interpreted the results. M.B. and P.G. led the writing of the paper, and all other co-authors contributed to the final version.

"Even in such fields as science, where reason is supposed to be most at home, there is a tendency for theories to become dogmas. Darwin's hypothesis of chance variations and natural selection became not merely a dogma of science, but was erected into a philosophy of the universe; and the limitations of the hypothesis and the empirical spirit of its creator were lost sight of in an intolerant tradition which has had serious consequences, not only for the development of natural science but for social philosophy. In every field of science we are haunted by ghosts of the past to which lesser minds pay superstitious reverence and by which even great minds are misled into false assumptions."

Edited by Solomon H. Snyder, The Johns Hopkins University School of Medicine, Baltimore, MD, and approved March 21, 2018 (received for review July 28, 2017)

Source: SUNO - Southern University at New Orleans

Abstract

Forensic science is critical to the administration of justice. The discipline of forensic science is remarkably complex and includes methodologies ranging from DNA analysis to chemical composition to pattern recognition. Many forensic practices developed under the auspices of law enforcement and were vetted primarily by the legal system rather than being subjected to scientific scrutiny and empirical testing. Beginning in the 1990s, exonerations based on DNA-related methods revealed problems with some forensic disciplines, leading to calls for major reforms. This process generated a National Academy of Science report in 2009 that was highly critical of many forensic practices and eventually led to the establishment of the National Commission for Forensic Science (NCFS) in 2013. The NCFS was a deliberative body that catalyzed communication between nonforensic scientists, forensic scientists, and other stakeholders in the legal community. In 2017, despite continuing problems with forensic science, the Department of Justice terminated the NCFS. Just when forensic science needs the most support, it is getting the least. We urge the larger scientific community to come to the aid of our forensic colleagues by advocating for urgently needed research, testing, and financial support.

Thursday, April 12, 2018

The American Physical Society (APS) is proud to celebrate the 125th anniversary of the Physical Review journals. To commemorate this milestone, the editors present a timeline of select papers and events that are of significance to physics and to the history of the APS. From Robert Millikan’s famous oil drop experiments to the discovery of gravitational waves, the Physical Review journals have published a wide range of important results, many of which have been recognized with Nobel and other notable prizes. The papers in the timeline, along with landmark events in the history of the Physical Review, will be highlighted on our journal websites and in social media throughout 2018.

Lepidopteran scales exhibit remarkably complex ultrastructures, many of which produce structural colors that are the basis for diverse communication strategies. Little is known, however, about the early evolution of lepidopteran scales and their photonic structures. We report scale architectures from Jurassic Lepidoptera from the United Kingdom, Germany, Kazakhstan, and China and from Tarachoptera (a stem group of Amphiesmenoptera) from mid-Cretaceous Burmese amber. The Jurassic lepidopterans exhibit a type 1 bilayer scale vestiture: an upper layer of large fused cover scales and a lower layer of small fused ground scales. This scale arrangement, plus preserved herringbone ornamentation on the cover scale surface, is almost identical to those of some extant Micropterigidae. Critically, the fossil scale ultrastructures have periodicities measuring from 140 to 2000 nm and are therefore capable of scattering visible light, providing the earliest evidence of structural colors in the insect fossil record. Optical modeling confirms that diffraction-related scattering mechanisms dominate the photonic properties of the fossil cover scales, which would have displayed broadband metallic hues as in numerous extant Micropterigidae. The fossil tarachopteran scales exhibit a unique suite of characteristics, including small size, elongate-spatulate shape, ridged ornamentation, and irregular arrangement, providing novel insight into the early evolution of lepidopteran scales. Combined, our results provide the earliest evidence for structural coloration in fossil lepidopterans and support the hypothesis that fused wing scales and the type 1 bilayer covering are groundplan features of the group. Wing scales likely had deep origins in earlier amphiesmenopteran lineages before the appearance of the Lepidoptera.
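To see why periodicities of 140–2000 nm are optically relevant for visible light (roughly 380–700 nm): the grating equation d·sin(θ) = mλ admits a propagating first diffraction order only when λ ≤ d. A minimal check in Python, with illustrative sample values of my choosing:

import math

def first_order_angle_deg(period_nm, wavelength_nm):
    """First-order diffraction angle from the grating equation
    d*sin(theta) = m*lambda (normal incidence, m = 1), or None if
    the order is evanescent (lambda > d)."""
    s = wavelength_nm / period_nm
    return math.degrees(math.asin(s)) if s <= 1 else None

for d in (140, 500, 2000):          # periodicities reported for the fossils
    for lam in (400, 550, 700):     # representative visible wavelengths
        print(d, lam, first_order_angle_deg(d, lam))
# d = 140 nm yields no propagating first order: such subwavelength
# structure scatters and interferes rather than acting as a simple grating.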


We show that the claimed confirmed planet Kepler-452b (a.k.a. K07016.01, KIC 8311864) cannot be confirmed using a purely statistical validation approach. Kepler detects many more periodic signals from instrumental effects than it does from transits, and it is likely impossible to confidently distinguish the two types of event at low signal-to-noise. As a result, the scenario that the observed signal is due to an instrumental artifact cannot be ruled out with 99% confidence, and the system must still be considered a candidate planet. We discuss the implications for other confirmed planets in or near the habitable zone.
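The statistical point lends itself to a toy Bayes calculation; every number below is a hypothetical placeholder, not a value from the paper. When instrument-induced periodic signals far outnumber true transits among low signal-to-noise detections, even a likelihood ratio strongly favoring the transit model can leave the posterior below the 99% validation bar:

# Toy Bayesian validation check. All numbers are hypothetical.
prior_planet = 0.01       # fraction of low-SNR periodic signals that are transits
prior_artifact = 0.99     # fraction that are instrumental artifacts
lr = 50.0                 # assumed likelihood ratio favoring the transit model

posterior = (lr * prior_planet) / (lr * prior_planet + prior_artifact)
print(f"P(planet | signal) = {posterior:.3f}")  # ~0.336, well below 0.99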