Many of today's communication tools rely on light or, more specifically, on applying information to a light wave. Until now, studies of electronic and optical devices built with the materials that are the foundations of modern electronics—such as radio, TV, and computers—have generally relied on nonlinear optical effects, producing devices whose bandwidth is limited to the gigahertz (GHz) frequency region. (One hertz is one cycle per second of a periodic phenomenon; a gigahertz is 1 billion cycles per second.) Thanks to research performed at the University of Pittsburgh, a physical basis for terahertz bandwidth (THz, or 1 trillion cycles per second)—the portion of the electromagnetic spectrum between infrared and microwave light—has now been demonstrated.

The latest AI program developed by DeepMind is not only brilliant and remarkably flexible—it’s also quite weird.

DeepMind published a paper this week describing a game-playing program it developed that proved capable of mastering chess and the Japanese game shogi, having already mastered the game of Go.

Demis Hassabis, the founder and CEO of DeepMind and an expert chess player himself, presented further details of the system, called Alpha Zero, at an AI conference in California on Thursday. The program often made moves that would seem unthinkable to a human chess player.

“It doesn’t play like a human, and it doesn’t play like a program,” Hassabis said at the Neural Information Processing Systems (NIPS) conference in Long Beach. “It plays in a third, almost alien, way.”

Besides showing how brilliant machine-learning programs can be at a specific task, this shows that artificial intelligence can be quite different from the human kind. As AI becomes more commonplace, we might need to be conscious of such “alien” behavior.

Alpha Zero is a more general version of AlphaGo, the program developed by DeepMind to play the board game Go. In 24 hours, Alpha Zero taught itself to play chess well enough to beat one of the best existing chess programs around.

What’s also remarkable, though, Hassabis explained, is that it sometimes makes seemingly crazy sacrifices, like offering up a bishop and queen to exploit a positional advantage that led to victory. Such sacrifices of high-value pieces are normally rare. In another case the program moved its queen to the corner of the board, a very bizarre trick with a surprising positional value. “It’s like chess from another dimension,” Hassabis said.

Hassabis speculates that because Alpha Zero teaches itself, it benefits from not following the usual approach of assigning value to pieces and trying to minimize losses. “Maybe our conception of chess has been too limited,” he said. “It could be an important moment for chess. We can graft it into our own play.”

A new method for 3D printing metals, using a widely employed stainless steel, has been shown to achieve exceptional levels of both ductility and strength compared with counterparts made by more conventional processes.

The research counters the skepticism surrounding the ability to make robust and ductile metals using 3D printing, and the discovery is therefore vital to advancing the technology for manufacturing heavy-duty components.

3D printing has long been recognized as a technology that could transform manufacturing, allowing objects with intricate and tailored geometries to be constructed quickly.

Indeed, the manufacturing leader General Electric (GE) has already been using metal 3D printing to create certain key parts, such as the fuel nozzles in its newest LEAP aircraft engine. The technology has allowed GE to consolidate 900 separate parts into just 16, and to make the fuel nozzles 60% cheaper and 40% lighter.

Worldwide revenue from the industry is predicted to exceed 20 billion USD per year by 2025. Despite this bright outlook, the quality of products from metal 3D printing has drawn skepticism. In the majority of metal 3D printing processes, products are built directly from metal powders, which makes them prone to defects and thus to weakened mechanical properties.

Dr. Leifeng Liu, a key participant in the project who recently moved to the University of Birmingham from Stockholm University as an AMCASH research fellow, said: “Strength and ductility are natural enemies of one another; most methods developed to strengthen metals consequently reduce ductility.”

Somewhere in our galaxy, an exoplanet is probably orbiting a star that’s colder than our sun, but instead of freezing solid, the planet might be cozy warm thanks to a greenhouse effect caused by methane in its atmosphere.

NASA astrobiologists from the Georgia Institute of Technology have developed a comprehensive new model that shows how planetary chemistry could make that happen. The model, published in a new study in the journal Nature Geoscience, is based on a likely scenario for Earth three billion years ago and is built around the planet's possible geological and biological chemistry.

The sun produced a quarter less light and heat then, but Earth remained temperate, and methane may have saved our planet from an eon-long deep-freeze, scientists hypothesize. Had it not, we and most other complex life probably wouldn’t be here today.

The new model combined multiple microbial metabolic processes with volcanic, oceanic and atmospheric activities, which may make it the most comprehensive of its kind to date. But while studying Earth’s distant past, the Georgia Tech researchers aimed their model light-years away, wanting it to someday help interpret conditions on recently discovered exoplanets.

The researchers set the model’s parameters broadly so that they could apply not only to our own planet but potentially also to its siblings with their varying sizes, geologies, and lifeforms.

Australian scientists sequenced the genome of the native marsupial, also known as the thylacine. It showed the species, alive until 1936, would have struggled to survive even without human contact. The research also provides further insights into the marsupial's unique appearance.

"Even if we hadn't hunted it to extinction, our analysis showed that the thylacine was in very poor [genetic] health," said lead researcher Dr Andrew Pask, from the University of Melbourne. "The population today would be very susceptible to diseases, and would not be very healthy."

He said problems with genetic diversity could be traced back as far as 70,000 years ago, when the population is thought to have suffered due to a climatic event. The researchers sequenced the genome from a 106-year-old specimen held by Museums Victoria. They said their study, published in the journal Nature Ecology and Evolution, is one of the most complete genetic blueprints of an extinct species.

A team of scientists at Caltech has figured out a way to encode more than one holographic image in a single surface without any loss of resolution. The engineering feat overturns a long-held assumption that a single surface could only project a single image regardless of the angle of illumination.

The technology hinges on the ability of a carefully engineered surface to reflect light differently depending on the angle at which incoming light strikes that surface.

Holograms are three-dimensional images encoded in two-dimensional surfaces. When the surface is illuminated with a laser, the image seems to pop off the surface and becomes visible. Traditionally, the angle at which laser light strikes the surface has been irrelevant—the same image will be visible regardless. That means that no matter how you illuminate the surface, you will only create one hologram.

Led by Andrei Faraon, assistant professor of applied physics and materials science in the Division of Engineering and Applied Science, the team developed silicon oxide and aluminum surfaces studded with tens of millions of tiny silicon posts, each just hundreds of nanometers tall. (For scale, a strand of human hair is 100,000 nanometers wide.) Each nanopost reflects light differently due to variations in its shape and size, and based on the angle of incoming light.

That last property allows each post to act as a pixel in more than one image: for example, acting as a black pixel if incoming light strikes the surface at 0 degrees and a white pixel if incoming light strikes the surface at 30 degrees.
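The angle-dependent pixel idea can be captured in a toy model. This sketch is purely illustrative: the two angles and the 4×4 bitmaps are invented for the example, not taken from the device itself.

```python
# Toy model (not the actual device physics): each nanopost stores one
# black/white value per illumination angle, so a single surface can
# encode two different images. Angles and bitmaps are invented.

ANGLES = (0, 30)  # degrees of incidence

image_at_0  = ["1010",
               "0101",
               "1010",
               "0101"]
image_at_30 = ["1111",
               "0000",
               "1111",
               "0000"]

# "Fabricate" the surface: each post maps angle -> pixel value (0 or 1)
surface = [[{0: int(a), 30: int(b)} for a, b in zip(r0, r30)]
           for r0, r30 in zip(image_at_0, image_at_30)]

def illuminate(surface, angle):
    """Read out the hologram visible at a given angle of incidence."""
    return ["".join(str(post[angle]) for post in row) for row in surface]

# The same surface yields a different image at each angle
assert illuminate(surface, 0) == image_at_0
assert illuminate(surface, 30) == image_at_30
```

Each "post" here does double duty exactly in the sense the article describes: it contributes a different pixel value to each of the two encoded images.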

"Each post can do double duty. This is how we're able to have more than one image encoded in the same surface with no loss of resolution," says Faraon (BS '04), senior author of a paper on the new material published by Physical Review X on December 7, 2017.

Just like humans, plants can succumb to the effects of general anesthetic drugs, researchers report this week in the Annals of Botany. The finding is striking for a variety of reasons—there’s the pesky fact that plants lack a central nervous system, for one thing. But, perhaps more noteworthy is that scientists still aren’t sure how general anesthetics work on humans—let alone plants.

Despite that, doctors have been using the drugs daily for more than a century to knock people out and avert pain during surgeries and other medical procedures. Yet the drugs’ exact effects on our body’s cells and electrical signals remain elusive.

The authors of the new study, led by Italian and German plant biologists, suggest that plants could help us—once and for all—figure out the drugs’ mechanism of action. Moreover, the researchers are hopeful that after that’s sorted out, plants could be a useful tool to study and develop new anesthetic drugs. “As plants in general, and the model plant Arabidopsis thaliana in particular, are suitable to experimental manipulation (they do not run away) and allow easy electrical recordings, we propose them as ideal model objects to study anaesthesia and to serve as a suitable test system for human anaesthesia,” they conclude.

The researchers exposed the plants to a few different general anesthetics, in a few different ways. They enclosed some in chambers where they were surrounded by diethyl ether vapor or xenon gas. For some, the researchers washed their roots and exposed them to lidocaine.

In all cases, the anesthetics temporarily left the plants still and unresponsive. The Venus flytrap's spiky trap didn’t slam shut when poked. The shy plant was no longer shy; its leaves stayed open when gently brushed. Similarly, the sundew plants didn’t bend to capture dead fruit flies, and the pea plant’s tendrils drooped and curled up instead of whirling in their normal upward fashion.

A camera system that captures a snapshot of overlapping light waves in a tiny fraction of a second could lead to new methods for imaging, allowing scientists to watch the brain’s neurons interacting or see neutrinos colliding with matter.

The camera system took snapshots at a rate of 100 billion frames per second, fast enough to capture a pulse of laser light spreading out in a Mach cone, the optical equivalent of the sonic boom created by an airplane traveling faster than the speed of sound. “You can think of the laser source as the supersonic jet and everything is dragged behind. Instead of generating a sound, we’re generating a scattered wavelet,” says Jinyang Liang, a postdoctoral research associate in Lihong Wang’s Optical Imaging Lab at Washington University, in St. Louis. The researchers and their collaborators from Tsinghua University in China and the University of Illinois at Urbana-Champaign describe their work in today’s issue of Science Advances.

An airplane creates a Mach cone when it passes Mach 1, the speed of sound. Because the source of the noise—the plane’s engines—is moving faster than sound itself, the sound waves get compressed and spread out in a cone shape behind the aircraft. The same thing can happen to light.

To generate their optical Mach cone, the researchers made two silicone display panels, which they laced with aluminum oxide powder to scatter the light toward the cameras. They placed the panels on opposite sides of an air-filled tunnel, then threw in a chunk of dry ice to create a fog meant to scatter light. The researchers then fired a laser beam through the tunnel. Because the silicone has a higher index of refraction than the air, light striking the panels moves more slowly than the light striking the fog, so the source of the light waves is “moving faster” than the waves in the silicone are, the same as with the supersonic jet.
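The cone's geometry follows the same relation as the acoustic case: the half-angle satisfies sin(θ) = v_wave / v_source. A quick sketch of that relation, using assumed refractive indices rather than values from the paper:

```python
import math

# Sketch of the Mach-cone geometry. For the optical analogue, the
# scattered wavelets in the silicone panel travel at c/n_panel while the
# "source" (the laser pulse sweeping through the fog) moves at c/n_air,
# so sin(theta) = n_air / n_panel. The index values below are assumed
# ballpark figures, not the experiment's actual parameters.

C = 299_792_458.0  # speed of light in vacuum, m/s

def mach_angle(v_source, v_wave):
    """Half-angle of the Mach cone, in degrees; requires v_source > v_wave."""
    if v_source <= v_wave:
        raise ValueError("no cone: source is not faster than the waves")
    return math.degrees(math.asin(v_wave / v_source))

n_air, n_panel = 1.0, 1.4          # assumed refractive indices
v_source = C / n_air               # pulse speed in the fog-filled tunnel
v_wave = C / n_panel               # wavelet speed in the silicone panel

theta = mach_angle(v_source, v_wave)
print(f"Mach cone half-angle: {theta:.1f} degrees")
```

The same function covers the acoustic case: a jet at Mach 2 (source twice as fast as sound) gives a 30-degree half-angle.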

Researchers have discovered that a protein implicated in human longevity may also play a role in restoring hearing after noise exposure. The findings, which were published in the journal Scientific Reports, could one day provide researchers with new tools to prevent hearing loss.

The study reveals that a gene called Forkhead Box O3 (Foxo3) appears to play a role in protecting outer hair cells in the inner ear from damage. The outer hair cells act as a biological sound amplifier and are critical to hearing. When exposed to loud noises, these cells undergo stress. In some individuals, these cells are able to recover, but in others the outer hair cells die, permanently impairing hearing. While hearing aids and other treatments can help recover some range of hearing, there is currently no biological cure for hearing loss.

"While more than a hundred genes have been identified as being involved in childhood hearing loss, little is known about the genes that regulate hearing recovery after noise exposure," said Patricia White, Ph.D., a research associate professor in the University of Rochester Medical Center (URMC) Department of Neuroscience and lead author of the study. "Our study shows that Foxo3 could play an important role in determining which individuals might be more susceptible to noise-induced hearing loss."

Approximately one-third of people who reach retirement age have some degree of hearing loss, primarily due to noise exposure over their lifetimes. The problem is even more acute in the military, with upwards of 60 percent of individuals who have been deployed in forward areas experiencing hearing loss, making it the most common disability for combat veterans.

Foxo3 is known to play an important role in the cell's stress response. For example, in the cardiovascular system, Foxo3 helps heart cells stay healthy by clearing away debris when the cells are damaged. Additionally, people with a genetic mutation that confers higher levels of the Foxo3 protein have been shown to live longer.

White and her team carried out a series of experiments involving knockout mice that were genetically engineered to lack the Foxo3 gene. The researchers found that, compared with normal mice, these animals were unable to recover hearing after being exposed to loud noises. The team also observed that during the experiment the Foxo3 knockout mice lost most of their outer hair cells, whereas in the normal mice outer hair cell loss was not significant.

"Discovering that Foxo3 was important for the survival of outer hair cells is a significant advance," says senior author Patricia White. "We are also excited about the results because Foxo3 is a transcription factor, which regulates the expression of many target genes. We are currently investigating what its targets might be in the inner ear, and how they could act to protect the ear from damage."

What’s better than a robot inspired by bees? A robot inspired by bees that can swim. Researchers led by a team at Harvard University have developed a tiny, 175-milligram (about the weight of two feathers) device with insect-inspired wings that can both flap and rotate, allowing it to fly above the ground or swim in shallow water, and to transition easily between the two.

Most CRISPR/Cas9 systems work by creating "double-strand breaks" (DSBs) in regions of the genome targeted for editing or for deletion, but many researchers are opposed to creating such breaks in the DNA of living humans. As a proof of concept, the Salk group used their new approach to treat several diseases, including diabetes, acute kidney disease, and muscular dystrophy, in mouse models.

"Although many studies have demonstrated that CRISPR/Cas9 can be applied as a powerful tool for gene therapy, there are growing concerns regarding unwanted mutations generated by the double-strand breaks through this technology," says Juan Carlos Izpisua Belmonte, a professor in Salk's Gene Expression Laboratory and senior author of the new paper, published in Cell on December 7, 2017. "We were able to get around that concern."

In the original CRISPR/Cas9 system, the enzyme Cas9 is coupled with guide RNAs that target it to the right spot in the genome to create DSBs. Recently, some researchers have started using a "dead" form of Cas9 (dCas9), which can still target specific places in the genome, but no longer cuts DNA. Instead, dCas9 has been coupled with transcriptional activation domains—molecular switches—that turn on targeted genes. But the resulting protein—dCas9 attached to the activator switches—is too large and bulky to fit into the vehicle typically used to deliver these kinds of therapies to cells in living organisms, namely adeno-associated viruses (AAVs). The lack of an efficient delivery system makes it very difficult to use this tool in clinical applications.
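A back-of-envelope tally makes the delivery problem concrete. The AAV packaging limit (roughly 4.7 kb) is a well-known ballpark figure, and the element sizes below are assumed approximations for illustration, not numbers from the Cell paper:

```python
# Rough size budget for the dCas9-plus-activator payload vs. what a
# single AAV can package. All figures are approximate, assumed values.

AAV_CAPACITY_KB = 4.7  # approximate AAV packaging limit, kilobases

cargo_kb = {
    "dCas9 (SpCas9-sized)": 4.1,  # coding sequence alone (approximate)
    "activator domains":    1.5,  # e.g. a VP64/p65-style stack (assumed)
    "promoter + polyA":     0.8,  # regulatory elements (assumed)
}

total = sum(cargo_kb.values())
print(f"total cargo: {total:.1f} kb vs AAV capacity {AAV_CAPACITY_KB} kb")
print("fits in one AAV" if total <= AAV_CAPACITY_KB
      else "too large for one AAV")
```

Even with generous rounding, the combined construct overshoots the capacity of a single vector, which is why the Salk team had to engineer around the size constraint.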

By thinking of organic chemistry as words and sentences instead of atoms and molecules, researchers have found a way for artificial intelligence to predict chemical reactions.

In a paper published on arXiv by researchers at IBM and being presented at this week’s Neural Information Processing Systems (NIPS) conference, the researchers demonstrate that by treating reaction predictions as a translation problem, they could come up with the correct reaction more often than was possible with previous models.

“Intuitively, there is an analogy between a chemist’s understanding of a compound and a language speaker’s understanding of a word,” the researchers write.

Using a neural network architecture commonly used in machine translation, the researchers trained the system on a data set of 395,496 reactions. From that data, the neural net had to learn the “syntax” of reactions in order to predict products for unseen compounds. The algorithm produced a list of the five most likely outcomes, and its top prediction was correct 80 percent of the time, beating the previous best model by six percentage points.
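To make the "reactions as sentences" framing concrete, here is a minimal sketch of the tokenization step such models typically start from: splitting a molecule's SMILES string into chemical "words". This regex scheme is a common simplification (it ignores, e.g., two-digit ring closures) and is not necessarily the one used in the IBM paper.

```python
import re

# Split a SMILES string into atom/bond/ring tokens, the "words" a
# sequence-to-sequence model would translate. Simplified illustration.
SMILES_TOKEN = re.compile(
    r"(\[[^\]]+\]"          # bracketed atoms, e.g. [NH4+]
    r"|Br|Cl"               # two-letter elements
    r"|[BCNOFPSI]"          # common one-letter elements
    r"|[bcnops]"            # aromatic atoms
    r"|[=#\-\+\(\)\\/@\.]"  # bonds, branches, stereo marks, separators
    r"|\d)"                 # single-digit ring closures
)

def tokenize(smiles: str) -> list[str]:
    tokens = SMILES_TOKEN.findall(smiles)
    # Round-trip check: every character must land in some token
    assert "".join(tokens) == smiles, "unrecognized characters in SMILES"
    return tokens

# Aspirin as one "sentence" of chemical "words"
print(tokenize("CC(=O)Oc1ccccc1C(=O)O"))
```

With molecules tokenized this way, predicting a reaction's product becomes translating one token sequence into another, exactly the task neural machine translation systems are built for.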

There are millions of chemical reactions that have yet to be documented, so this approach could help speed up research for things like drug discovery. But researchers say that as more data gets added to the models, more double-checking will have to take place. Teodoro Laino, one of the researchers, told IEEE Spectrum that they “didn't create this tool to replace organic chemists, but to help them.”

AI tools could help us turn information gleaned from genetic sequencing into life-saving therapies. Almost 15 years after scientists first sequenced the human genome, making sense of the enormous amount of data that encodes human life remains a formidable challenge. But it is also precisely the sort of problem that machine learning excels at.

Google has now released a tool called DeepVariant that uses the latest AI techniques to build a more accurate picture of a person’s genome from sequencing data. DeepVariant helps turn high-throughput sequencing readouts into a picture of a full genome. It automatically identifies small insertion and deletion mutations and single-base-pair mutations in sequencing data.

High-throughput sequencing became widely available in the 2000s and has made genome sequencing more accessible. But the data produced using such systems has offered only a limited, error-prone snapshot of a full genome. It is typically challenging for scientists to distinguish small mutations from random errors generated during the sequencing process, especially in repetitive portions of a genome. These mutations may be directly relevant to diseases such as cancer.
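To illustrate why distinguishing mutations from read errors is hard with simple rules, here is a sketch of a naive caller that thresholds on allele fraction. This is invented for illustration and is not DeepVariant's method; DeepVariant learns the decision from data instead of using a fixed cutoff.

```python
from collections import Counter

# Naive variant calling at a single genomic position: call a variant if
# some non-reference base exceeds a fixed fraction of the reads covering
# that site. Illustrative only; real callers model error rates, base
# quality, and local context.

def naive_call(ref_base: str, observed_bases: str, min_fraction: float = 0.2):
    """Return the alt base if it clears the threshold, else None."""
    counts = Counter(observed_bases)
    depth = len(observed_bases)
    alt, n = max(((b, c) for b, c in counts.items() if b != ref_base),
                 key=lambda bc: bc[1], default=(None, 0))
    return alt if alt is not None and n / depth >= min_fraction else None

# 10 reads covering one site: two disagreeing reads sit right at the
# threshold, where a fixed cutoff cannot tell variant from noise...
print(naive_call("A", "AAAAAAAAGG"))
# ...while a single disagreeing read falls below it.
print(naive_call("A", "AAAAAAAAAG"))
```

The weakness is exactly the one Chapman describes: in repetitive or error-prone regions, any fixed threshold either misses real mutations or passes sequencing noise through.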

A number of tools exist for interpreting these readouts, including GATK, VarDict, and FreeBayes. However, these software programs typically use simpler statistical and machine-learning approaches to identifying mutations by attempting to rule out read errors. “One of the challenges is in difficult parts of the genome, where each of the tools has strengths and weaknesses,” says Brad Chapman, a research scientist at Harvard’s School of Public Health who tested an early version of DeepVariant. “These difficult regions are increasingly important for clinical sequencing, and it’s important to have multiple methods.”

DeepVariant was developed by researchers from the Google Brain team, a group that focuses on developing and applying AI techniques, and Verily, another Alphabet subsidiary that is focused on the life sciences. The team collected millions of high-throughput reads and fully sequenced genomes from the Genome in a Bottle (GIAB) project, a public-private effort to promote genomic sequencing tools and techniques. They fed the data to a deep-learning system and painstakingly tweaked the parameters of the model until it learned to interpret sequenced data with a high level of accuracy.

Last year, DeepVariant won first place in the PrecisionFDA Truth Challenge, a contest run by the FDA to promote more accurate genetic sequencing. “The success of DeepVariant is important because it demonstrates that in genomics, deep learning can be used to automatically train systems that perform better than complicated hand-engineered systems,” says Brendan Frey, CEO of Deep Genomics.

The release of DeepVariant is the latest sign that machine learning may be poised to boost progress in genomics. Deep Genomics is one of several companies trying to use AI approaches such as deep learning to tease out genetic causes of diseases and to identify potential drug therapies (see “An AI-Driven Genomics Company Is Turning to Drugs”).

Deep Genomics aims to develop drugs by using deep learning to find patterns in genomic and medical data. Frey says AI will eventually go well beyond helping to sequence genomic data. “The gap that is currently blocking medicine right now is in our inability to accurately map genetic variants to disease mechanisms and to use that knowledge to rapidly identify life-saving therapies,” he says.

Another prominent company in this area is Wuxi Nextcode, which has offices in Shanghai, Reykjavik, and Cambridge, Massachusetts. Wuxi Nextcode has amassed the world’s largest collection of fully sequenced human genomes, and the company is investing heavily in machine-learning methods.

DeepVariant will also be available on the Google Cloud Platform. Google and its competitors are furiously adding machine-learning features to their cloud platforms in an effort to lure anyone who might want to tap into the latest AI techniques (see “Ambient AI Is About to Devour the Software Industry”).

Using light-emitting nanoparticles, Rutgers University-New Brunswick scientists have invented a highly effective method to detect tiny tumors and track their spread, potentially leading to earlier cancer detection and more precise treatment.

The technology, announced today, could improve patient cure rates and survival times.

“We’ve always had this dream that we can track the progression of cancer in real time, and that’s what we’ve done here,” said Prabhas V. Moghe, a corresponding author of the study and distinguished professor of biomedical engineering and chemical and biochemical engineering at Rutgers–New Brunswick. “We’ve tracked the disease in its very incipient stages.”

“The Achilles’ heel of surgical management for cancer is the presence of micro metastases. This is also a problem for proper staging or treatment planning. The nanoprobes described in this paper will go a long way to solving these problems,” said Dr. Steven K. Libutti, director of Rutgers Cancer Institute of New Jersey. He is senior vice president of oncology services for RWJBarnabas Health and vice chancellor for cancer programs for Rutgers Biomedical and Health Sciences.

The ability to spot early tumors that are starting to spread remains a major challenge in cancer diagnosis and treatment, as most imaging methods fail to detect small cancerous lesions. But the Rutgers study shows that tiny tumors in mice can be detected by injecting nanoprobes, microscopic optical devices that emit short-wave infrared light as they travel through the bloodstream, and that the probes can even track tiny tumors across multiple organs.

The nanoprobes were significantly faster than MRIs at detecting the minute spread of tiny lesions and tumors in the adrenal glands and bones in mice. That would likely translate to detection months earlier in people, potentially resulting in saved lives, said Vidya Ganapathy, a corresponding author and assistant research professor in the Department of Biomedical Engineering.

Just imagine: An optical lens so powerful that it lets you view features the size of a small virus on the surface of a living cell in its natural environment.

Construction of instruments with this capability is now possible because of a fundamental advance in the quality of an optical material used in hyperlensing, a method of creating lenses that can resolve objects much smaller than the wavelength of light. The achievement was reported by a team of researchers led by Joshua Caldwell, associate professor of mechanical engineering at Vanderbilt University, in a paper published Dec. 11 in the journal Nature Materials.

The optical material involved is hexagonal boron nitride (hBN), a natural crystal with hyperlensing properties. The best previously reported resolution using hBN was an object about 36 times smaller than the infrared wavelength used: about the size of the smallest bacteria. The new paper describes improvements in the quality of the crystal that enhance its potential imaging capability by about a factor of ten.

The researchers achieved this enhancement by making hBN crystals using isotopically purified boron. Natural boron contains two isotopes that differ in weight by about 10 percent, a combination that significantly degrades the crystal's optical properties in the infrared.

"We have demonstrated that the inherent efficiency limitations of hyperlenses can be overcome through isotopic engineering," said team member Alexander Giles, research physicist at the U.S. Naval Research Laboratory. "Controlling and manipulating light at nanoscale dimensions is notoriously difficult and inefficient. Our work provides a new path forward for the next generation of materials and devices."

The researchers calculate that a lens made from their purified crystal can in principle capture images of objects as small as 30 nanometers in size. To put this in perspective, there are 25 million nanometers in an inch and human hair ranges from 80,000 to 100,000 nanometers in diameter. A human red blood cell is about 9,000 nanometers and viruses range from 20 to 400 nanometers.

Marine biologist Ruth Gates sat down in an oversized wooden rocking chair at an oceanside resort here last week to talk about the next frontier in coral science and a new hope for saving coral reefs reeling from climate change: genetic technology.

“There are hundreds of species of coral, all with complex biologies and physiological traits that vary based on their DNA and environment,” Gates, director of the Hawaii Institute of Marine Biology, said while seated on a sprawling lanai overlooking acres of coral reefs awash in turquoise waters.

“Using genetic technology to identify corals resilient to environmental stressors may allow us to save corals – which are some of the most threatened organisms on Earth,” added Gates, a leading coral scientist who was featured in the new documentary “Chasing Coral.”

Coral reefs provide habitat to a quarter of the world’s marine species and are crucial sources of food and income for hundreds of millions of people. While corals are typically hardy creatures, rising ocean temperatures, acidification and pollution are harming corals on a scale not seen in recorded history. The world has lost about 50 percent of its coral reefs in just the past three decades, and it is expected to lose more than 40 percent more over the next three decades. The unprecedented back-to-back coral bleaching events of 2014–17 devastated coral reefs worldwide.

According to Gates and other marine scientists, identifying both weak and resilient coral species is imperative to protect surviving reefs and help others recover. But cataloging corals with traditional visualization techniques can be challenging because even individuals belonging to the same species can be quite variable in appearance and react in different ways to the same environmental stressors.

Optical illusions exploit the gap between what your eyes see and what your brain perceives. They reveal the way your visual system edits images before you're even made aware of them, like a personal assistant deciding what is and isn't worthy of your attention.

People were creating optical illusions long before we knew what made them work. Today, advances in neuroscience have pinpointed the visual processes that fool your brain into falling for many of them. Others still elude explanation. Here, a selection of eye- and brain-boggling illusions, and explanations of how they work.

Zilin Jiang from Technion — Israel Institute of Technology and Alexandr Polyanskii from the Moscow Institute of Physics and Technology (MIPT) have proved László Fejes Tóth’s zone conjecture. Formulated in 1973, it says that if a unit sphere is completely covered by several zones, their combined width is at least π. The proof, published in the journal Geometric and Functional Analysis, is important for discrete geometry and enables new problems to be formulated.
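In symbols, the theorem can be stated as follows, where a zone of width \(w\) is the set of points within spherical distance \(w/2\) of some great circle on the unit sphere \(S^2\):

```latex
% Fejes Tóth's zone conjecture (now a theorem of Jiang and Polyanskii):
% if zones Z_{w_1}, ..., Z_{w_n} of widths w_1, ..., w_n cover the unit
% sphere, their combined width is at least pi.
\[
  S^2 \subseteq \bigcup_{i=1}^{n} Z_{w_i}
  \quad\Longrightarrow\quad
  \sum_{i=1}^{n} w_i \;\ge\; \pi .
\]
```

The bound is tight: a single zone of width exactly \(\pi\) (a band extending \(\pi/2\) on either side of a great circle) already covers the whole sphere.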

For the first time, computers have done better. Researchers sponsored a worldwide competition to develop an algorithm that would identify breast cancer cells on scanned lymph node slides.

Teams that signed up were sent 270 slides, 110 with nodal metastases and 160 without, painstakingly hand-labeled to show the computers where the diseased cells were. After learning from that data, the algorithms were unleashed on 129 new, unlabeled slides. The winner was the algorithm that got the most slides right.

But let's start with the humans. Eleven trained pathologists were given two hours to look at the 129 test slides, a workflow that I am told is fairly standard. Of the 49 test slides with metastatic disease, the pathologists found 31 on average, a substantial false-negative rate. One pathologist was allowed to work without time constraints; unrealistic as that is, he or she correctly identified 46 of the 49 slides with cancer and 79 of the 80 without.
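Those counts translate directly into sensitivity (true-positive rate) and specificity (true-negative rate). A quick check using only the figures quoted above:

```python
# Sanity check of the pathologist performance figures from the text:
# the time-limited panel vs. the single unconstrained pathologist.

def sensitivity(true_pos, total_pos):
    """Fraction of truly positive slides correctly flagged."""
    return true_pos / total_pos

def specificity(true_neg, total_neg):
    """Fraction of truly negative slides correctly cleared."""
    return true_neg / total_neg

# Time-limited panel: 31 of 49 cancer slides found on average (~63%)
print(f"panel sensitivity: {sensitivity(31, 49):.0%}")

# Unconstrained pathologist: 46/49 with cancer, 79/80 without
print(f"unconstrained sensitivity: {sensitivity(46, 49):.1%}")
print(f"unconstrained specificity: {specificity(79, 80):.1%}")
```

The gap between the two rows is the point of the study: time pressure, not ability, drives much of the panel's roughly one-in-three miss rate.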

Thirty-two machine-learning algorithms competed; the best came from a Harvard-MIT collaboration. Its performance on the test images was nearly perfect: it identified cancerous and non-cancerous slides with almost 100% accuracy and highlighted the areas most suspicious for metastatic cancer.

This is pretty impressive, but there's something really special about this study that has me excited. In most of these image-classification tasks, the gold standard is human perception: some human expert, or a group of them, looks at a slide, X-ray, or retinal image and says "yes, this is pulmonary edema." I am always left wondering: OK, but how can we ever beat humans if humans are the gold standard?

In a discovery that seems straight out of Jurassic Park, researchers have identified a 99-million-year-old fossilized tick on a dinosaur feather. In a significant divergence from the Hollywood storyline, the fossils will not be yielding any dinosaur DNA. The tick’s last meal was not preserved, and even if it had been, the lifespan of DNA is too short for it to be successfully extracted.

However, because they were trapped together, the fossils offer the first direct evidence of ticks feeding on dinosaur blood. The study also examined other ticks trapped in amber, including a previously unknown species that is thought to have also fed on feathered dinosaurs.

In late December 2014, a submarine volcano in the South Pacific Kingdom of Tonga erupted, sending a violent stream of steam, ash and rock into the air. The ash plumes rose as high as 30,000 feet (9 kilometers) into the sky, diverting flights. When the ash finally settled in January 2015, a newborn island with a 400-foot (120-meter) summit sat nestled between two older islands, visible to satellites in space.

The newly formed Tongan island, unofficially known as Hunga Tonga-Hunga Ha'apai after its neighbors, was initially projected to last a few months. Now it has a 6- to 30-year lease on life, according to a new NASA study.

Because Hunga Tonga-Hunga Ha'apai is the first island of this type to erupt and persist into the modern satellite era, it gives scientists an unprecedented view from space of its early life and evolution. The new study offers insight into its longevity and the erosion that shapes new islands. Understanding these processes could also provide insights into similar features in other parts of the solar system, including Mars.

A new optical illusion has been discovered, and it’s really quite striking. The strange effect is called the ‘curvature blindness’ illusion, and it’s described in a new paper from psychologist Kohske Takahashi of Chukyo University, Japan. Here’s an example of the illusion: a series of wavy horizontal lines, all of which have exactly the same curvature.

Reptiles have scales. Birds have feathers. Mammals have hair. How did they all get them?

For a long time scientists thought the spikes, plumage and fur characteristic of these groups originated independently of each other. But a study published Friday suggests that they all evolved from a common ancestor some 320 million years ago.

This ancient reptilian creature — which gave rise to dinosaurs, birds and mammals — is thought to have been covered in scale-like structures. What that creature looked like is not exactly known, but the scales on its skin developed from structures called placodes — tiny bumps of thick tissue found on the surface of developing embryos.

Scientists had previously found placodes on the embryos of birds and mammals, where they develop into feathers and hairs, but had never found the spots on a reptilian embryo before. The apparent lack of placodes in present-day reptiles fueled controversy about how these features first formed.

“People were fighting about the fact that reptiles either lost it, or birds and mammals independently developed them,” said Michel C. Milinkovitch, an evolutionary developmental biologist from the University of Geneva in Switzerland and an author of the new paper. “Now we are lucky enough to put this debate to rest, because we found the placodes in all reptiles: snakes, lizards and crocodiles.”

In their paper, published in the journal Science Advances, Dr. Milinkovitch and his team report the first findings of the anatomical structures in Nile crocodiles, bearded dragon lizards and corn snakes. They concluded that birds, mammals and reptiles all inherited their placodes from the same ancient reptilian ancestor.

A collection of large, complex objects sculpted out of DNA have been unveiled by three separate research groups, expanding the range of nanometre-scale structures that DNA self-assembly can make. Groups led by California Institute of Technology’s (Caltech’s) Lulu Qian, Harvard’s Peng Yin, and Technical University of Munich’s (TU Munich’s) Hendrik Dietz have each developed complementary methods.

Shawn Douglas from the University of California, San Francisco, who wasn’t involved in these studies, emphasizes that the largest DNA structures now weigh billions of Daltons, a thousand times heavier than the largest were a decade ago. The tubes Dietz’s team constructs can also be up to 1000 nanometres long, ten times as big as the largest DNA structures were previously.

Remarkably, DNA construction is already at least 26 years old, from when New York University’s Ned Seeman published his group’s assembly of cubes from ten DNA strands in 1991. In the years since, researchers have built progressively bigger and more intricate DNA objects, and used them for computational and mechanical functions. One key underlying technology, known as DNA origami, relies on forcing one long scaffold strand of DNA into a desired shape using dozens of other, shorter, staple strands.

Dietz’s team was inspired by viruses, whose outer shells contain just a few types of protein subunit closed into regular shapes. They explored whether DNA origami subunits might do the same, trying different designs and studying their properties using cryo-electron microscopy. They discovered that these first-level subunits needed to form precise shapes and be rigid to successfully self-assemble at second and third levels.

‘The subunit needs to withstand collisions from solution molecules – the faces have certain relative angles and if they fluctuate too much they’ll never form a closed object,’ Dietz explains. They also shouldn’t bind too tightly, because they’ll get stuck in partially formed states. ‘If we have sufficiently weak interactions then subunits can associate but also dissociate. If you have some erroneously stuck subunits they fall off again.’

The TU Munich team’s final designs used V-shaped first-level DNA origami subunits, which could link up into second-level, 350-nanometre-diameter rings or ‘reactive vertices’. Depending on their shape, the reactive vertices could link up and close onto each other into third-level, virus-sized cages that were tetrahedral, hexahedral or dodecahedral. The largest weighed 1.2 billion Daltons and contained 220 DNA origami units. Similarly, the rings can link up into third-level, 1000-nanometre-long tubes.

While TU Munich’s approach means all assemblies have to be symmetrical, the Caltech team’s multi-level assembly approach creates custom designs. They produce two-dimensional images from a jigsaw of 64 DNA origami tiles, reaching up to 8,704 pixels and 700 micrometres wide. ‘Once we have synthesized each individual tile, we place each one into its own test tube for a total of 64 tubes,’ explains Qian’s grad student Philip Petersen. ‘First, we combine the contents of certain tubes together to get 16 two-by-two squares. Then those are combined in a certain way to get four tubes each with a four-by-four square, and then the final four tubes are combined to create one large, eight-by-eight square composed of 64 tiles. We design the edges of each tile so that we know exactly how they will combine.’

Superconductors carry electricity with perfect efficiency, unlike the inevitable waste inherent in traditional conductors like copper. But that perfection comes at the price of extreme cold—even so-called high-temperature superconductivity (HTS) only emerges well below zero degrees Fahrenheit. Discovering the ever-elusive mechanism behind HTS could revolutionize everything from regional power grids to wind turbines.

Now, a collaboration led by the U.S. Department of Energy's Brookhaven National Laboratory has discovered a surprising breakdown in the electron interactions that may underpin HTS. The scientists found that as superconductivity vanishes at higher temperatures, powerful waves of electrons begin to curiously uncouple and behave independently—like ocean waves splitting and rippling in different directions.

"For the first time, we pinpointed these key electron interactions happening after superconductivity subsides," said first author and Brookhaven Lab research associate Hu Miao. "The portrait is both stranger and more exciting than we expected, and it offers new ways to understand and potentially exploit these remarkable materials."

The new study, published in the journal PNAS, explores the puzzling interplay between two key quantum properties of electrons: spin and charge. "We know charge and spin lock together and form waves in copper-oxides cooled down to superconducting temperatures," said study senior author and Brookhaven Lab physicist Mark Dean. "But we didn't realize that these electron waves persist but seem to uncouple at higher temperatures."

New calculations show that the accretion flows that form after a neutron star collision can eject large amounts of matter that is rich in gold and other heavy elements.

Gold has long been appreciated for its beauty, its rareness, and a number of astonishing physical properties, like the fact that a single coin can be beaten into an area of more than 30 square meters. As much as gold has been searched for on Earth, there has been a long debate about its cosmic origin. But the detection this past summer of both gravitational waves and an electromagnetic flash from a neutron star merger (see 16 October 2017 Viewpoint) implies that heavy elements are forged around the most extreme objects in the Universe: neutron stars and black holes. A new theoretical study by Daniel Siegel and Brian Metzger from Columbia University, New York [1], simulates in detail the postmerger accretion of neutron star matter onto a black hole and confirms earlier, but less sophisticated, studies claiming that such systems are indeed promising production sites for gold and other heavy elements. The results may provide new insights into the recent neutron star merger observations, such as why the electromagnetic flash that accompanied the gravitational waves was so bright.

The heaviest elements are formed through the so-called “r process,” in which a nucleus grows larger by rapidly capturing multiple neutrons. Neutrons are favored over protons, whose positive charge repels them from the positively charged nucleus. After capturing a neutron, the nucleus is generally not stable. Instead, it may transform a neutron into a proton in a beta decay, thereby emitting an electron and an antineutrino. The next neutrons need to be captured on a very short time scale, before the next beta decays set in. This is the defining feature of the r process, and its basic workings were already understood in the late 1950s [2].
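Schematically, the two competing reactions can be written as follows (a standard textbook formulation, not taken from the paper):

```latex
% Rapid neutron capture, followed by beta decay:
\[
  {}^{A}_{Z}X + n \;\longrightarrow\; {}^{A+1}_{Z}X + \gamma ,
  \qquad
  {}^{A}_{Z}X \;\longrightarrow\; {}^{A}_{Z+1}Y + e^- + \bar{\nu}_e .
\]
% The defining condition of the r process: captures must outpace decays,
\[
  \tau_{n\text{-capture}} \;\ll\; \tau_{\beta} .
\]
```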

Although it’s clear what the r process needs—an explosion with lots of neutrons—where this actually happens has been a mystery for decades. The first suspected culprits were massive stars that explode as core-collapse supernovae. Later on, researchers developed an alternative r-process scenario involving mergers of neutron stars in binary systems, but this idea retained an “exotic” aura as such mergers had never been observed before. It is rather obvious that neutron stars would be an ideal place for the r process; after all, they consist predominantly of neutrons. Much less obvious is whether there is any way to eject the matter in the first place. A neutron star has an enormous gravitational pull, with a gravitational binding energy in excess of 100 MeV/nucleon. By comparison, the most energetic nuclear reactions release less than 10 MeV of energy per nucleon. So nuclear reactions would fall far short of liberating any matter from a neutron star surface. To rip a neutron star apart, it takes a merger with another extreme object, either a black hole [3] or another neutron star [4]. Besides being potential heavy element sources, these violent collisions were also predicted [4] to produce short gamma-ray bursts (GRBs), which are brief and enormously bright flashes of gamma rays.
