Each winter rainwater from the land above made its way through the cave's ceiling and dripped onto the floor. As each layer of the stalagmite formed, oxygen and carbon isotopes within these raindrops were captured and preserved inside the rock. Now, thousands of years later, a team led by Oxford University scientists is using the data locked inside this stalagmite to get a glimpse of the ancient winter climate of Western North America.

The team's results show that in recent prehistory the region has seen rapid shifts between dry, warm periods and wet, cold ones. The findings hint at the importance of the Pacific Decadal Oscillation – a pattern of climate variability that shifts every 50-70 years – to this area.

'We picked Oregon because it's around this latitude where winter storms hit the West coast of North America; it is representative of an area stretching from California to British Columbia,' Vasile Ersek of Oxford University's Department of Earth Sciences, lead author of the report, told me. Water resources in the region are highly dependent on winter rainfall; without the winter rains the land is arid.

'Most other ways of estimating past climate, like tree ring data, only tell us about summers, when plants are growing,' Vasile explains. 'This work gives us a unique insight into winter climate over thousands of years with an unprecedented combination of length, detail and dating accuracy.' 'Moreover, because the cave is only around 70 km from the Pacific Ocean, and directly affected by processes occurring over the ocean, it also represents a record of past climate variability in the Eastern Pacific, where detailed records of past climate are otherwise very hard to obtain.'

The stalagmite record suggests that there have been important variations in both rainfall and temperature (c.1 degree Celsius) over the last 13,000 years – with the region's climate switching between extreme dry-warm and wet-cold periods within just a few decades. But those hoping that this cave rock might tell us about man's influence on the climate will be disappointed; after bearing witness to so many winters its record-keeping stopped before the industrial age began.

It is remarkable that this small cave could record over 13,000 winters in Oregon. Tree rings can also tell us about past climate, but only for the summer growing season; the stalagmite lets us read the winters, preserving the signal of the rainwater that dripped into the cave each year. It is a valuable discovery because it reveals how the climate changed from season to season.

NOTE: All articles in the amazing-science newsletter can also be sorted by topic. To do so, click the FIND button (symbolized by the FUNNEL at the top right of the screen) to display all the relevant postings SORTED by TOPICS.

The COSAC instrument collected molecules from 10 km (6.2 miles) above the comet's surface, after the initial touchdown, and at the final landing site. Sixteen organic compounds were identified, divided into six classes of organic molecules (alcohols, carbonyls, amines, nitriles, amides and isocyanates). Of these, four organic compounds were detected for the first time on a comet (methyl isocyanate, acetone, propionaldehyde and acetamide).

Almost all the compounds detected are potential precursors, products, combinations or by-products of each other, which provides a glimpse of the chemical processes at work in a cometary nucleus, and even in the collapsing Solar Nebula in the very early Solar System.

COSAC identified a large number of nitrogen compounds but no sulfur compounds, contrary to what the ROSINA instrument on board Rosetta had observed. This suggests that the chemical composition varies depending on the area sampled.

A special issue of the journal Science highlights seven new studies that delve into the data that has been collected by ESA’s probe Philae on 67P/Churyumov-Gerasimenko.

A vaccine against Ebola has been shown to be 100% successful in trials conducted during the outbreak in Guinea and is likely to bring the west African epidemic to an end, experts say. The results of the trials involving 4,000 people are remarkable because of the unprecedented speed with which the development of the vaccine and the testing were carried out.

Scientists, doctors, donors and drug companies collaborated to race the vaccine, in just 12 months, through a development and testing process that usually takes more than a decade.

“Having seen the devastating effects of Ebola on communities and even whole countries with my own eyes, I am very encouraged by today’s news,” said Børge Brende, the foreign minister of Norway, which helped fund the trial.

A new technique for finding and characterizing microbes has boosted the number of known bacteria by almost 50 percent, revealing a hidden world all around us.

A team of microbiologists based at the University of California, Berkeley, recently figured out one such new way of detecting life. At a stroke, their work expanded the number of known types — or phyla — of bacteria by nearly 50 percent, a dramatic change that indicates just how many forms of life on earth have escaped our notice so far.

“Some of the branches in the tree of life had been noted before,” said Chris Brown, a student in the lab of Jill Banfield and lead author of the paper. “With this study we were able to fill in many gaps.”

As an organizational tool, the tree of life has been around for a long time. Lamarck had his version. Darwin had another. The basic structure of the current tree goes back 40 years to the microbiologist Carl Woese, who divided life into three domains: eukaryotes, which include all plants and animals; bacteria; and archaea, single-celled microorganisms with their own distinct features. After a point, discovery came to hinge on finding new ways of searching. “We used to think there were just plants and animals,” said Edward Rubin, director of the U.S. Department of Energy’s Joint Genome Institute. “Then we got microscopes, and got microbes. Then we got small levels of DNA sequencing.”

DNA sequencing is at the heart of this current study, though the researchers’ success also owes a debt to more basic technology. The team gathered water samples from a research site on the Colorado River near the town of Rifle, Colo. Before doing any sequencing, they passed the water through a pair of increasingly fine filters — with pores 0.2 and 0.1 microns wide — and then analyzed the cells captured by the filters. At this point they already had undiscovered life on their hands, for the simple reason that scientists had not thought to look on such a tiny scale before. “Most people assumed that bacteria were bigger, and most bacteria are bigger,” Rubin said. “Banfield has shown that there are whole populations that are very small.”

The researchers extracted the DNA from the cellular material and sent it to the Joint Genome Institute for sequencing. What they got back was a mess. Imagine being handed a box of pieces from thousands of different jigsaw puzzles and having to assemble them without knowing what any of the final images look like. That’s the challenge researchers face when performing metagenomic analysis — sequencing scrambled genetic material from many organisms at once.

The Berkeley team began the reassembly process with algorithms that assembled bits of the sequenced genetic code into slightly longer strings called contigs. “You no longer have tiny pieces of DNA, you have bigger pieces,” Brown said. “Then you figure out which of these larger pieces are part of a single genome.”

This part of the process, in which contigs are combined to reconstruct the genome sequence, is called genome binning. To execute it, the researchers relied on another set of algorithms, customized for the task by Itai Sharon, a co-author of the study. They also assembled some of the genomes manually, making decisions about what goes where based on the fact that some characteristics are consistent for a given genome. For example, the percentage of Gs and Cs will be similar on any part of an organism’s DNA.
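The GC-content rule of thumb described above can be sketched in a few lines: compute each contig's G+C fraction and group contigs whose fractions are close. The sequences, the greedy grouping, and the 0.05 tolerance below are invented for illustration; the study's actual binning algorithms also weigh other genome-wide signals such as read coverage.

```python
def gc_fraction(seq):
    """Fraction of bases that are G or C."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def bin_by_gc(contigs, tolerance=0.05):
    """Greedily group contigs whose GC fractions are within `tolerance`
    of the first member of a bin -- a toy stand-in for genome binning."""
    bins = []  # each bin: (GC fraction of first member, list of contigs)
    for contig in contigs:
        gc = gc_fraction(contig)
        for anchor_gc, members in bins:
            if abs(gc - anchor_gc) < tolerance:
                members.append(contig)
                break
        else:
            bins.append((gc, [contig]))
    return [members for _, members in bins]

# Invented toy contigs: the first two share a GC fraction of 0.5
contigs = ["ATATGGCC", "AATTGGCC", "ATATATGC", "GCGCGCGC"]
print(bin_by_gc(contigs))
```

Real binning tools combine several such features because GC content alone cannot separate genomes with similar composition.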

When the assembly was complete, the researchers had eight full bacterial genomes and 789 draft genomes that were roughly 90 percent complete. Some of the organisms had been glimpsed before; many others were completely new.

By hijacking the cellular machinery that makes proteins, bioengineers have developed a tool that could allow them to better understand protein synthesis, explore how antibiotics work and convert cells into custom chemical factories.

All life owes its existence to the ribosome, a huge, hardworking molecular machine that reads RNA templates transcribed from DNA, and uses the information to string together amino acids into proteins. A cell requires functioning ribosomes to survive — but they are difficult to engineer. If the engineered molecules deviate too far from the standard design, the cell will die.

“An engineered ribosome learns to do better what you want, but it starts to forget how to do its normal job,” says biochemist Alexander Mankin of the University of Illinois in Chicago.

Mankin teamed up with biochemical engineer Michael Jewett of Northwestern University in Evanston, Illinois, and others to create a ribosome that engineers could tinker with. The results of their handiwork are published in Nature.

Ribosomes are conglomerates of RNA and protein, hundreds of times larger than typical enzymes. RNA is thought to be responsible for the bulk of a ribosome’s work, which is considerable — it produces protein at a rate of up to 20 amino acids a second with a remarkably low error rate. “The ribosome deserves all possible respect,” says Mankin.

It is these properties that draw the attention of bioengineers such as Jewett. These researchers would like to create ribosomes that could do other chemical reactions and spit out novel polymers, or incorporate unnatural amino acids into proteins that could be used as drugs.

Each ribosome contains two clumps of snarled RNA molecules, a small subunit and a large one. The subunits come together to translate a messenger RNA sequence into protein, and then separate. They assemble again when it is time to make another protein, although not necessarily with the same partners. “In a way they are very promiscuous,” says Mankin.

That promiscuity hindered efforts to engineer ribosomes to incorporate unnatural amino acids or other compounds. Engineered and natural subunits mixed and matched, reducing the cell's ability to produce normal proteins. The solution, Mankin and Jewett's team decided, was to marry together two engineered subunits. It was unclear whether the approach would work: it was thought that ribosomes exist as two distinct subunits because that separation is necessary for their function.

The researchers used a strand of RNA to tether the large and the small subunit together, toiling for months to get the length and location of the link just right so that the machine could still function. “We certainly came close, several times, to saying ‘OK, biology wins',” says Jewett. The team screened its tethered ribosomes in Escherichia coli cells that lacked functioning ribosomal RNA, and eventually found engineered ribosomes that worked well enough to support some growth, albeit slow. They then tested their platform to confirm that a tethered ribosome could operate side-by-side with natural ribosomes.

The result unlocks a molecular playground for bioengineers: by tethering the artificial subunits together, they can tweak the engineered machines to their liking without halting cell growth, says Joseph Puglisi, a structural biologist at Stanford University in California. Puglisi hopes to harness the system to study how the ribosome functions. James Collins, a bioengineer at the Massachusetts Institute of Technology in Cambridge, says that his lab may use the system to study antibiotics — many of which work by binding to bacterial ribosomes.

Two scientists at a German university have developed a tool which recognizes a person's face in complete darkness. The technology identifies a person from their thermal signature and matches infrared images with ordinary photos. It uses a deep neural network system to process the pictures and recognise people in bad light or darkness.

However, the technology is not being used commercially yet, with one of its creators, Dr Saquib Sarfraz, saying: "There are no plans to roll it out." Dr Sarfraz, who worked on the project with colleague Dr Rainer Stiefelhagen at the Karlsruhe Institute of Technology, told the BBC: "We have been doing research on face recognition already for several years and have a scientific interest in the problem.

"Our presented work on face recognition in thermal images is currently not used outside the research lab." In tests, the technology had an 80% success rate, and worked 55% of the time with one image, and Dr Sarfraz said that "more training data and a more powerful architecture" could produce better results.

With a higher success rate, the tool could potentially be used by police to catch and identify criminals.

It’s no secret that corporate America has declared a war on death. Fueled by the collective fears of 76 million baby boomers, heavyweights like Google and Synthetic Genomics have waded into the life extension business, bringing with them millions of dollars in funding. The result has been an uptick in the number of discoveries made in gerontology – the study of aging. But despite swamping the issue with money and media attention, an actual cure for aging remains elusive. That may soon change.

Last week, a discovery published by scientists at Northwestern University detailed a new genetic switch that may prove to be a watershed in the fight against aging. It also sheds light on one of the most significant controversies in longevity research – whether aging is the result of numerous bodily systems independently breaking down, or is controlled by a single genetic pathway.

Needless to say, much rests on the answer to this question. If aging is a result of multiple independent processes, then the problem is something of a Medusa’s head, where each source of decrepitude must be tackled individually. If, on the other hand, aging has a single genetic source, one could hypothetically throw the switch and cure aging in one swoop.

Unfortunately, in biological systems the more complex answers tend to be the right ones. This is perhaps why many scientists were reluctant to believe there could be a single genetic pathway controlling the aging process. However, in what might turn out to be a stroke of luck, there does indeed seem to be a single switch responsible for the aging process — at least in the C. elegans worms on which the research at Northwestern was conducted. Fortunately for us, humans possess the same genetic pathway as the worms, so there is reason to believe the research will apply to Homo sapiens and many other animals as well.

So what exactly is the genetic switch that Dr. Morimoto and his colleagues at Northwestern discovered? The story begins eight hours into the life of the C. elegans worm, when its stress-protective mechanisms suddenly go into decline. After the first telltale indicators of cellular stress begin occurring, the worm’s body rapidly deteriorates and within a few weeks the creature is dead.

The researchers traced the decline to the gamete cells within the worm, and from there to a particular genetic pathway that is initialized when the worm reaches sexual maturity. Their research indicates that at the very time the worm reaches sexual maturity and starts creating gamete cells, it begins sending a signal to other cell tissues to turn off protective mechanisms, thereby setting into motion the aging process. Now that the exact pathway has been discovered, scientists will begin working on ways to foil that process and block the signal that causes the decline in cellular resilience.

Many ancient eastern traditions such as the yogic system in India and Taoists of ancient China also connected longevity with gamete cells. In Vedic mythology, the God possessing the knowledge of the Sanjivani mantra capable of bestowing immortality is named Sukracharya, which literally translates as “semen teacher.” While it remains unclear whether Morimoto and his colleagues have discovered the fabled Sanjivani, one thing is sure: they will not be the last to go looking for it. And with the deep pockets of Google and Big Pharma backing this quest, it’s increasingly likely that results will be forthcoming.

Recently acquired images of Tethys, one of the ice moons of Saturn, have given scientists their best view yet of several “unusual, arc-shaped reddish streaks” that sweep across the satellite’s surface.

Images taken using clear, green, infrared and ultraviolet spectral filters were combined to create the enhanced-color views, which highlight subtle color differences across the icy moon’s surface at wavelengths not visible to human eyes.

A few of the red arcs can be seen faintly in observations made earlier in the Cassini mission, which has been in orbit at Saturn since 2004. But the color images for this observation, obtained in April 2015, are the first to show large northern areas of Tethys under the illumination and viewing conditions necessary to see the arcs clearly. As the Saturn system moved into its northern hemisphere summer over the past few years, northern latitudes have become increasingly well illuminated. As a result, the arcs have become clearly visible for the first time.

“The red arcs really popped out when we saw the new images,” said Cassini participating scientist Paul Schenk of the Lunar and Planetary Institute in Houston. “It’s surprising how extensive these features are.”

The origin of the features and their reddish color is a mystery to Cassini scientists. Possibilities being studied include ideas that the reddish material is exposed ice with chemical impurities, or the result of outgassing from inside Tethys. They could also be associated with features like fractures that are below the resolution of the available images.

Apart from a few small craters on Dione, reddish-tinted features are rare on Saturn’s other moons. Many reddish features do occur, however, on the geologically young surface of Jupiter’s moon Europa.

“The red arcs must be geologically young because they cut across older features like impact craters, but we don’t know their age in years,” said Paul Helfenstein, a Cassini imaging scientist at Cornell University, Ithaca, New York, who helped plan the observations. “If the stain is only a thin, colored veneer on the icy soil, exposure to the space environment at Tethys’ surface might erase them on relatively short time scales.”

Dr. Salinas himself has a rare medical condition, one that stands in marked contrast to his patients’: While Josh appeared unresponsive even to his own sensations, Salinas is peculiarly attuned to the sensations of others. If he sees someone slapped across the cheek, Salinas feels a hint of the slap against his own cheek. A pinch on a stranger’s right arm might become a tickle on his own. “If a person is touched, I feel it, and then I recognize that it’s touch,” Salinas says.

The condition is called mirror-touch synesthesia, and it has aroused significant interest among neuroscientists in recent years because it appears to be an extreme form of a basic human trait. In all of us, mirror neurons in the premotor cortex and other areas of the brain activate when we watch someone else’s behaviors and actions. Our brains map the regions of the body where we see someone else caressed, jabbed, or whacked, and they mimic just a shade of that feeling on the same spots on our own bodies. For mirror-touch synesthetes like Salinas, that mental simulacrum is so strong that it crosses a threshold into near-tactile sensation, sometimes indistinguishable from one’s own. Neuroscientists regard the condition as a state of “heightened empathic ability.”

This might sound like a superpower of sorts, a mystical connection between one person’s subjective experience and another’s. But to be clear, Salinas cannot read minds. He doesn’t know whether Josh felt the impact of the reflex hammer, and the tingling in his kneecap says more about his own extraordinary nervous system than it does about that of his patient. What’s more, for those who experience mirror-touch synesthesia—an estimated 1.6 percent of the general population—the condition is often more debilitating than it is empowering.

All antimalarial drugs produced to date target the disease-causing parasite, but a new study in the Journal of Experimental Medicine shows that drugs which target host proteins are also a potential avenue for new interventions.

This study targets a protein that the most deadly malaria parasite, Plasmodium falciparum, relies on to invade human red blood cells. Targeting this human protein blocks an essential interaction, and can wipe out an established malaria infection in mice in less than three days.

Targeting host factors may help researchers overcome one of the biggest challenges to malaria control: drug resistance. Drug resistance arises due to genetic changes in the rapidly-evolving Plasmodium falciparum parasite, which, in Southeast Asia, has rendered one of the current front-line antimalarials, artemisinin, largely ineffective. Researchers are battling to find a solution before the resistant strains spread to other malaria endemic areas, including Africa, a region that accounts for 90 per cent of malaria deaths worldwide. By targeting host factors, rather than the parasite factors, the researchers believe that parasites are far less likely to develop resistance to the new drug.

"This counter-intuitive approach to malaria treatment leaves the parasite powerless," explains Dr. Zenon Zenonos, a first author from the Wellcome Trust Sanger Institute. "If the parasite can't bind to the surface of our red blood cells and invade, it can't reach the next stage in its lifecycle, so it dies. There's nothing the parasite can do to get round it, as the interaction is absolutely essential for infection to occur."

PfRH5, a protein required by the malaria parasite, needs to bind to basigin, a protein that is displayed on the outer surface of human red blood cells, for the cell to become infected. Blockade of the PfRH5-basigin interaction renders the parasite unable to enter red blood cells, and therefore the infection is wiped out.

"When we discovered the PfRH5-basigin interaction in 2011, we knew we had found a chink in the malaria parasite's armour, the question was how to exploit it," says Dr Gavin Wright, corresponding author from the Wellcome Trust Sanger Institute. "Using PfRH5 in a vaccine is one approach, but we were also interested to see if we could disrupt the interaction in the opposite direction rather than by conventionally targeting the parasite. This has significant advantages in preventing the ability of the parasite to develop resistance."

To study the likely human response to therapy, the antibody targeting basigin described in this study was tested in humanised mice, which have had the majority of their immune and blood cells replaced with human cells. In the mice, levels of infection fell to essentially undetectable levels within 72 hours of treatment with low doses of the antibody. Importantly, no toxic side effects were observed in the mouse models treated with the antibody in these experiments.

By now the thought of a 3D printed structure like a home or an apartment building doesn’t surprise most of us. After all, we know for a fact that several ambitious projects to construct such structures are currently underway. With that said, the majority of these buildings only utilize 3D printing for their exterior walls, as sort of a replacement for the use of concrete block or wood framing.

Recently, however, the United Arab Emirates National Innovation Committee has revealed a project which will take things a step or two further. The committee, as well as Shaikh Mohammad Bin Rashid Al Maktoum, UAE Vice-President and Prime Minister and Ruler of Dubai, wants to transform the UAE into a technological center of the world when it comes to architecture and design, and has set forth a plan to 3D print an entire office building. Not only will the exterior walls be printed, but so too will the interior walls, and furniture.

To print the 2,000 square foot building, engineers will use a 20-foot tall 3D printer, which will be assembled on the build site, located at a busy intersection right in the heart of Dubai. They will use Special Reinforced Concrete (SRC), Fiber Reinforced Plastic (FRP), and Glass Fiber Reinforced Gypsum (GRG) to fabricate the various structural and decorative components of the structure. Total construction time will be just a few weeks, while labor costs will be reduced by 50-80% and construction waste will be reduced between 30-60%.

Once completed, the building will be used for a variety of purposes and will feature its very own 3D printing exhibition inside. This is the first major project undertaken by the ‘Museum of the Future’, a museum that began construction earlier in the year, with the promise that 3D printing will be utilized in its creation.

Can the flap of a butterfly's wings in Brazil set off a tornado in Texas? This intriguing hypothetical scenario, commonly called "the butterfly effect," has come to embody the popular conception of a chaotic system, in which a small difference in initial conditions will cascade toward a vastly different outcome in the future.

Understanding and modeling chaos can help address a variety of scientific and engineering questions, and so researchers have worked to develop better mathematical definitions of chaos. These definitions, in turn, will aid the construction of models that more accurately represent real-world chaotic systems.

Now, researchers from the University of Maryland have described a new definition of chaos that applies more broadly than previous definitions. This new definition is compact, can be easily approximated by numerical methods and works for a wide variety of chaotic systems. The discovery could one day help advance computer modeling across a wide variety of disciplines, from medicine to meteorology and beyond. The researchers present their new definition in the July 28, 2015 issue of the journal Chaos.

"Our definition of chaos identifies chaotic behavior even when it lurks in the dark corners of a model," said Brian Hunt, a professor of mathematics with a joint appointment in the Institute for Physical Science and Technology (IPST) at UMD. Hunt co-authored the paper with Edward Ott, a Distinguished University Professor of Physics and Electrical and Computer Engineering with a joint appointment in the Institute for Research in Electronics and Applied Physics (IREAP) at UMD.

The study of chaos is relatively young. MIT meteorologist Edward Lorenz, whose work gave rise to the term "the butterfly effect," first noticed chaotic characteristics in weather models in the mid-20th century. In 1963, he published a set of differential equations to describe atmospheric airflow and noted that tiny variations in initial conditions could drastically alter the solution to the equations over time, making it difficult to predict the weather in the long term.

Mathematically, extreme sensitivity to initial conditions can be represented by a quantity called a Lyapunov exponent. This number is positive if two infinitesimally close starting points diverge exponentially as time progresses. Yet, Lyapunov exponents have limitations as a definition of chaos: they only test for chaos in particular solutions of a model, not in the model itself, and they can be positive even when the underlying model is considered too straightforward to be deemed chaotic.
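As a concrete illustration (using the logistic map, a standard textbook example, not part of the UMD study), the Lyapunov exponent of the one-dimensional map x → r·x·(1−x) can be estimated as the long-run orbit average of ln|r(1−2x)|. At r = 4 the map is chaotic and the estimate approaches ln 2 ≈ 0.693; at r = 2.5 the orbit settles to a fixed point and the exponent is negative.

```python
import math

def lyapunov_logistic(r, x0=0.2, n_transient=1000, n_iter=200_000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    by averaging ln|d/dx (r*x*(1-x))| = ln|r*(1 - 2x)| along an orbit."""
    x = x0
    for _ in range(n_transient):   # discard transient behavior
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n_iter

print(lyapunov_logistic(4.0))  # chaotic regime: approaches ln 2
print(lyapunov_logistic(2.5))  # stable fixed point: negative exponent
```

A positive result signals the exponential divergence of nearby orbits that Lorenz observed; as the text notes, a positive exponent alone is not a complete definition of chaos.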

University of California, Berkeley, researchers have discovered a much cheaper and easier way to target a hot new gene editing tool, CRISPR-Cas9, to cut or label DNA. The CRISPR-Cas9 technique, invented three years ago at UC Berkeley, has taken genomics by storm, with its ability to latch on to a very specific sequence of DNA and cut it, inactivating genes with ease. This has great promise for targeted gene therapy to cure genetic diseases, and for discovering the causes of disease.

The technology can also be tweaked to latch on without cutting, labeling DNA with a fluorescent probe that allows researchers to locate and track a gene among thousands in the nucleus of a living, dividing cell. The newly developed technique now makes it easier to create the RNA guides that allow CRISPR-Cas9 to target DNA so precisely. In fact, for less than $100 in supplies, anyone can make tens of thousands of such precisely guided probes covering an organism’s entire genome. The process, which they refer to as CRISPR-EATING – for “Everything Available Turned Into New Guides” – is reported in a paper to appear in the August 10 issue of the journal Developmental Cell.

As proof of principle, the researchers turned the entire genome of the common gut bacterium E. coli into a library of 40,000 RNA guides that covered 88 percent of the bacterial genome. Each RNA guide is a segment of 20 RNA bases: the template used by CRISPR-Cas9 as it seeks out complementary DNA to bind and cut.

These libraries can be employed in traditional CRISPR-Cas9 editing to target any specific DNA sequence in the genome and cut it, which is what researchers do to pin down the function of a gene: knock it out and see what bad things happen in the cell. This can help pinpoint the cause of a disease, for example. The process is called genetic screening and is done in batches: each of the thousands of probes is introduced into a single cell on a plate filled with hundreds of thousands of cells.

“We can make these libraries for a lot less money, which makes genetic screening potentially accessible in organisms less well studied,” such as those that have not yet had their genomes sequenced, said first author Andrew Lane, a UC Berkeley post-doctoral fellow.

But Lane and colleague Rebecca Heald, UC Berkeley professor of molecular and cell biology, developed the technology in order to track chromosomes in real-time in living cells, in particular during cell division, a process known as mitosis. This is part of a larger project by Heald to find out what regulates the size of the nucleus and other subcellular components as organisms grow from just a few cells to many cells.

“This technology will allow us to paint a whole chromosome and look at it live and really follow it in the nucleus during the cell cycle or as it goes through developmental transitions, for example in an embryo, to see how it changes in size and structure,” Heald said.

The new technique uses standard PCR (polymerase chain reaction) to generate many short lengths of DNA from whatever segment of DNA a researcher is interested in, up to and including an entire genome.

These fragments are then precisely snipped at a region called a PAM, which is critical to CRISPR binding. Simple restriction enzymes are then used to cut each piece 20 base pairs from the PAM end, generating the exact size of RNA guide that CRISPR uses in searching the genome for complementary sites. These guide RNAs are then easily incorporated into the CRISPR-Cas9 complex, yielding tens of thousands of probes for labeling or cutting DNA.
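The geometry of that step — a 20-base guide lying immediately upstream of an NGG PAM — can be illustrated with a simple single-strand scan. This toy scan is not the team's PCR-and-restriction-enzyme protocol, and it ignores the reverse strand (CCN sites on the shown strand), but it shows what a guide is relative to the PAM:

```python
def extract_guides(genome, guide_len=20):
    """Return (position, guide) pairs: the `guide_len` bases immediately
    5' of each NGG PAM on the given strand."""
    guides = []
    for i in range(guide_len, len(genome) - 2):
        # PAM occupies positions i..i+2: any base followed by two Gs
        if genome[i + 1 : i + 3] == "GG":
            guides.append((i - guide_len, genome[i - guide_len : i]))
    return guides

# Invented toy sequence with a single NGG PAM ("TGG") after 20 bases
seq = "ACGTACGTACGTACGTACGTTGGA"
print(extract_guides(seq))
```

Run genome-wide, a scan like this yields the tens of thousands of candidate guides that make up a library such as the one described above.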

“By using the genome itself as a source for guide RNAs, their approach puts the creation of libraries that target contiguous regions in reach of almost any lab,” said Jacob Corn, managing and scientific director of the Innovative Genomics Initiative at UC Berkeley. “This could be very useful for genome imaging and certain kinds of screens, and I’m very interested to see how it enables biological discovery using Cas9 tools.”

As before, CRISPR is something I am interested in studying, since it is relatively new in the gene-editing world. Also, UC Berkeley is a college I am considering applying to and attending, so it would be good to contribute to this research.

Earth's magnetic field is 800 million years older than previously thought, new research suggests.

A new analysis of Western Australian zircon minerals has found the engine that generates the field started not long after the planet formed. Earth's so-called "geodynamo", involving the movement of molten iron in the Earth's outer core, began 4.22 billion years ago, say researchers today in the journal Science.

"This opens a window into a period that we know almost nothing about," says co-author, Professor Francis Nimmo of the University of California, Santa Cruz. "Before this study we knew that the dynamo had existed for around three and a half billion years. What this study has done is push back the age of the dynamo by another 800 million years."

Earth's magnetic field acts as a shield protecting the planet's atmosphere and water, which make life on Earth possible. Without the magnetic field Earth's atmosphere would have been eroded away by the solar wind, a stream of charged particles flowing from the Sun.

The magnetic field was particularly important in Earth's early history when solar winds were about 100 times stronger than they are now.

"The young Sun was very active, and so having a strong magnetic field early on allows you to hang on to your atmosphere," says Nimmo.

"Mars had a dynamo early on, but then that dynamo died," he says. "Part of the reason that Mars lost its atmosphere is not simply that it has less gravity, but also that it didn't have a magnetic field protecting the atmosphere from being blown away."

Docile ants become aggressive guard dogs after a secret signal from their caterpillar overlord. The idea turns on its head the assumption that the two species exchange favours in an even-handed relationship.

The caterpillars of the Japanese oakblue butterfly (Narathura japonica) grow up wrapped inside leaves on oak trees. To protect themselves against predators like spiders and wasps, they attract ant bodyguards, Pristomyrmex punctatus, with an offering of sugar droplets.

The relationship was thought to be a fair exchange of services in which both parties benefit. But Masaru Hojo from Kobe University in Japan noticed something peculiar: the caterpillars were always attended by the same ant individuals.

“It also seemed that the ants never moved away or returned to their nests,” he says. They seemed to abandon searching for food, and were just standing around guarding the caterpillar.

Cells contain an ocean of twisting and turning RNA molecules. Now researchers are working out the structures — and how important they could be.

When Philip Bevilacqua decided to work out the shapes of all the RNA molecules in a living plant cell, he faced two problems. First, he had not studied plant biology since high school. And second, biochemists had tended to examine single RNA molecules; tackling the multitudes that waft around in a cell was a much thornier challenge.

Bevilacqua, an RNA chemist at Pennsylvania State University in University Park, was undeterred. He knew that RNA molecules were vital regulators of cell biology and that their structures might offer broad lessons about how they work. He brushed up on plant anatomy in an undergraduate course and worked with molecular plant biologist Sarah Assmann to develop a technique that could cope with RNAs at scale.

In November 2013, they and their teams became the first to describe the shapes of thousands of RNAs in a living cell — revealing a veritable sculpture garden of different forms in the weedy thale cress, Arabidopsis thaliana1.

One month later, a group at the University of California, San Francisco, reported a comparable study of yeast and human cells2. The number of RNA structures they managed to resolve was “unprecedented”, says Alain Laederach, an RNA biologist at the University of North Carolina at Chapel Hill (UNC).

Scientists' view of RNA has transformed over the past few decades. Once, most RNAs were thought to be relatively uninteresting pieces of limp spaghetti that ferried information between the molecules that mattered, DNA and protein. Now, biologists know that RNAs serve many other essential functions: they help with protein synthesis, control gene activity and modify other RNAs. At least 85% of the human genome is transcribed into RNA, and there is vigorous debate about what, if anything, it does.

But a key mystery has remained: its convoluted structures. Unlike DNA, which forms a predictable double helix, RNA comprises a single strand that folds up into elaborate loops, bulges, pseudo-knots, hammerheads, hairpins and other 3D motifs. These structures flip and twist between different forms, and are thought to be central to the operation of RNA, albeit in ways that are not yet known. “It's a big missing piece of the puzzle of understanding how RNAs work,” says Jonathan Weissman, a biophysicist and leader of the yeast and human RNA study.

In the past few years, researchers have begun to get a toehold on the problem. Bevilacqua, Weissman and others have devised techniques that allow them to take snapshots of RNA configurations en masse inside cells — and found that the molecules often look nothing like what is seen when RNA folds under artificial conditions. The work is helping them to decipher some of the rules that govern RNA structure, which might be useful in understanding human variation and disease — and even in improving agricultural crops.

“It gets at the very basic problem of how do living things evolve and how do these molecular rules affect what we look like and how we function,” says Laederach. “And that, fundamentally as a biologist, is really exciting.”

The best-described RNA structures are what Kevin Weeks, a chemical biologist at UNC, calls “RNA rocks”: molecules that have changed little in their sequence or structure over evolutionary time. These include transfer RNAs and ribosomal RNAs (both involved in protein synthesis) as well as enzymatic RNAs known as ribozymes. “But in the world of RNAs,” Weeks says, “these are probably huge outliers.”

Researchers are trying to program self-driving cars to make split-second decisions that raise real ethical questions.

A philosopher is perhaps the last person you’d expect to have a hand in designing your next car, but that’s exactly what one expert on self-driving vehicles has in mind.

Chris Gerdes, a professor at Stanford University, leads a research lab that is experimenting with sophisticated hardware and software for automated driving. But together with Patrick Lin, a professor of philosophy at Cal Poly, he is also exploring the ethical dilemmas that may arise when vehicle self-driving is deployed in the real world.

Gerdes and Lin organized a workshop at Stanford earlier this year that brought together philosophers and engineers to discuss the issue. They implemented different ethical settings in the software that controls automated vehicles and then tested the code in simulations and even in real vehicles. Such settings might, for example, tell a car to prioritize avoiding humans over avoiding parked vehicles, or not to swerve for squirrels.
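The workshop's software isn't described in detail, but one hypothetical way to encode such "ethical settings" — purely a sketch, with invented names and weights — is as a cost map the planner consults when scoring candidate maneuvers, so that swerving toward a parked car always beats staying on course toward a pedestrian, while a squirrel never justifies a swerve:

```python
# Hypothetical penalty weights per obstacle class: higher cost = avoid harder.
OBSTACLE_COST = {
    "pedestrian": 1_000_000,   # effectively never acceptable to hit
    "oncoming_vehicle": 10_000,
    "parked_vehicle": 1_000,
    "squirrel": 10,            # low enough that the car won't swerve for it
}

def maneuver_cost(obstacles_hit):
    """Total penalty for a candidate maneuver, given what it would hit."""
    return sum(OBSTACLE_COST[o] for o in obstacles_hit)

def choose_maneuver(options):
    """Pick the candidate trajectory with the lowest total obstacle cost.

    `options` is a list of (name, obstacles_hit) pairs."""
    return min(options, key=lambda opt: maneuver_cost(opt[1]))[0]

# Swerving into a parked car beats braking straight toward a pedestrian.
options = [("brake_straight", ["pedestrian"]),
           ("swerve_right", ["parked_vehicle"])]
print(choose_maneuver(options))  # swerve_right
```

Real planners score continuous trajectories rather than discrete options, but the core design question — who sets these weights, and how — is exactly the one the workshop raised.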

Fully self-driving vehicles are still at the research stage, but automated driving technology is rapidly creeping into vehicles. Over the next couple of years, a number of carmakers plan to release vehicles capable of steering, accelerating, and braking for themselves on highways for extended periods. Some cars already feature sensors that can detect pedestrians or cyclists, and warn drivers if it seems they might hit someone.

So far, self-driving cars have been involved in very few accidents. Google’s automated cars have covered nearly a million miles of road with just a few rear-enders, and these vehicles typically deal with uncertain situations by simply stopping (see “Google’s Self-Driving Car Chief Defends Safety Record”).

As the technology advances, however, and cars become capable of interpreting more complex scenes, automated driving systems may need to make split-second decisions that raise real ethical questions.

At a recent industry event, Gerdes gave an example of one such scenario: a child suddenly dashing into the road, forcing the self-driving car to choose between hitting the child or swerving into an oncoming van.

“As we see this with human eyes, one of these obstacles has a lot more value than the other,” Gerdes said. “What is the car’s responsibility?”

Gerdes pointed out that it might even be ethically preferable to put the passengers of the self-driving car at risk. “If that would avoid the child, if it would save the child’s life, could we injure the occupant of the vehicle? These are very tough decisions that those that design control algorithms for automated vehicles face every day,” he said.

Nearly all life on Earth depends on photosynthesis, the conversion of light energy into chemical energy. Oxygen-producing plants and cyanobacteria perfected this process 2.7 billion years ago. But the first photosynthetic organisms were likely single-celled purple bacteria, which began using near-infrared light to drive photosynthesis about 3.4 billion years ago, producing sulfur or sulfates instead of oxygen.

Found in the bottom of lakes and ponds today, purple bacteria possess simpler photosynthetic organelles—specialized cellular subunits called chromatophores—than plants and algae. For that reason, Klaus Schulten of the University of Illinois at Urbana–Champaign (UIUC) targeted the chromatophore to study photosynthesis at the atomic level.

As a computational biophysicist, Schulten unites biologists' experimental data with the physical laws that govern the behavior of matter. This combination allows him to simulate biomolecules, atom by atom, using supercomputers. The simulations reveal interactions between molecules that are impossible to observe in the laboratory, providing plausible explanations for how molecules carry out biological functions in nature.

In 2014, a team led by Schulten used the Titan supercomputer, located at the US Department of Energy's (DOE's) Oak Ridge National Laboratory, to construct and simulate a single chromatophore. The soccer ball-shaped chromatophore contained more than 100 million atoms—a significantly larger biomolecular system than any previously modeled. The project's scale required Titan, the flagship supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility, to calculate the interaction of millions of atoms in a feasible time frame that would allow for data analysis.

"For years, scientists have seen that cells are made of these machines, but they could only look at part of the machine. It's like looking at a car engine and saying, 'Oh, there's an interesting cable, an interesting screw, an interesting cylinder.' You look at the parts and describe them with love and care, but you don't understand how the engine actually works that way," Schulten said. "Titan gave us the fantastic level of computing we needed to see the whole picture. For the first time, we could go from looking at the cable, the screw, the cylinder to looking at the whole engine."

Schulten's chromatophore simulation is being used to understand the fundamental process of photosynthesis, basic research that could one day lead to better solar energy technology. Of particular interest: how hundreds of proteins work together to capture light energy at an estimated 90 percent efficiency.

Sea spiders belong to a group of arthropods called the pycnogonids, which are found scuttling along the bottom of many of the world’s oceans and seas. Despite their name, they are not true spiders. Most are relatively small – it’s only around the poles that sea spiders grow large, a trait they share with many marine species. Exactly why this happens remains a mystery.

Many sea spiders are carnivorous, dining on worms, jellyfish and sponges. “They have a giant proboscis to suck up their food,” says Florian Leese at Ruhr University Bochum in Germany. Like true spiders, some sea spiders have eight legs. But not all do. “Some have 10 and even 12 legs,” says Leese.

Curiously, though, their bodies don’t appear to have much else apart from their long legs and proboscis. “They don’t really have a body,” says Leese. “They have their organs in their legs.” These creatures are sometimes called the pantopoda – meaning “all legs” – because of their bizarre anatomy.

The lack of an obvious body means sea spiders don’t need to bother with a respiratory system. Simple diffusion can deliver gases to all of the tissues. The Southern Ocean giant sea spider is one of the most common sea spiders in the waters around Antarctica. It also lives in coastal waters off South America, South Africa and Madagascar, down to a depth of 4.9 kilometers.

It is so widespread that some have wondered whether it really is a single species. To find out, Leese and his colleagues examined DNA taken from 300 specimens. Animal cells usually carry two forms of DNA: most is in the form of nuclear DNA in the cell’s nucleus, but there is a second form of DNA in the mitochondria – often called the “powerhouse of the cell”. Mitochondrial DNA is usually only inherited down the female line.

The mitochondrial genes fell into about 20 distinct groups, apparently suggesting the Southern Ocean giant sea spider should really be broken up into 20 distinct species. But the nuclear DNA showed that many of these apparently distinct species can interbreed, and have done so in the recent past. In fact, the team says, if the Southern Ocean giant sea spider is divided into several distinct species, we should probably recognise only five – not 20.

Why is this? The mitochondrial DNA sequences are so distinct that the sea spiders probably began to diverge about a million years ago – perhaps during glacial periods when a deterioration in conditions left small populations of sea spiders isolated from one another in ice-free “refugia”, where they could each develop their own genetic mutations.

But when environmental conditions improved and the spider lineages began expanding out of those refugia, they began to interbreed and hybridise. That’s not unlike the way different human lineages like the Neanderthals, Denisovans and our species interbred when they came into contact after thousands of years of isolation.

The results are important for conservation. Mitochondrial and nuclear DNA often show the same general pattern, says Leese, so when easier-to-analyse mitochondrial DNA indicates one species actually breaks down into several “cryptic” species, conservationists want to protect all of the lineages. But nuclear DNA sequences might show that many of those cryptic species don’t really exist. “The study advises caution in calling distinct mitochondrial lineages species,” says Leese.

NASA's Swift satellite detected a rising tide of high-energy X-rays from the constellation Cygnus on June 15, just before 2:32 p.m. EDT. About 10 minutes later, the Japanese experiment on the International Space Station called the Monitor of All-sky X-ray Image (MAXI) also picked up the flare.

The outburst came from V404 Cygni, a binary system located about 8,000 light-years away that contains a black hole. Every couple of decades the black hole fires up in an outburst of high-energy light, becoming an X-ray nova. Until the Swift detection, it had been slumbering since 1989.

An X-ray nova is a bright, short-lived X-ray source that reaches peak intensity in a few days and then fades out over a period of weeks or months. The outburst occurs when stored gas abruptly rushes toward a neutron star or black hole. By studying the patterns of the X-rays produced, astronomers can determine the kind of object at the heart of the eruption.

"Relative to the lifetime of space observatories, these black hole eruptions are quite rare," said Neil Gehrels, Swift's principal investigator at NASA's Goddard Space Flight Center in Greenbelt, Maryland. "So when we see one of them flare up, we try to throw everything we have at it, monitoring across the spectrum, from radio waves to gamma rays."

Astronomers classify this type of system as a low-mass X-ray binary. In V404 Cygni, a star slightly smaller than the sun orbits a black hole 10 times its mass in only 6.5 days. The close orbit and strong gravity of the black hole produce tidal forces that pull a stream of gas from its partner. The gas travels to a storage disk around the black hole and heats up to millions of degrees, producing a steady stream of X-rays as it falls inward.
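The quoted orbit lets us estimate how tight the system is. A quick sketch applies Kepler's third law under simple assumptions (circular orbit, a star of about 0.7 solar masses, and a black hole ten times that, per the text; the exact masses are assumptions for illustration):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def orbital_separation(m_total_suns, period_days):
    """Semi-major axis from Kepler's third law: a^3 = G*M*P^2 / (4*pi^2)."""
    m = m_total_suns * M_SUN
    p = period_days * 86400.0  # days -> seconds
    return (G * m * p**2 / (4 * math.pi**2)) ** (1.0 / 3.0)

# Assumed: ~0.7 solar masses for the star plus ~7 for the black hole.
a = orbital_separation(0.7 + 7.0, 6.5)
print(f"separation ~ {a / AU:.2f} AU")  # roughly 0.13 AU
```

A separation of little more than a tenth of the Earth-Sun distance is what lets the black hole's tidal forces strip gas from its companion.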

But the disk flips between two dramatically different conditions. In its cooler state, the gas resists inward flow and just collects in the outer part of the disk like water behind a dam. Inevitably the build-up of gas overwhelms the dam, and a tsunami of hot bright gas rushes toward the black hole.

Astronomers relish the opportunity to collect simultaneous multiwavelength data on black hole binaries, especially one as close as V404 Cygni. In 1938 and 1956, astronomers caught V404 Cygni undergoing outbursts in visible light. During its eruption in 1989, the system was observed by Ginga, an X-ray satellite operated by Japan, and instruments aboard Russia's Mir space station.

"Right now, V404 Cygni shows exceptional variation at all wavelengths, offering us a rare chance to add to this unique data set," said Eleonora Troja, a Swift team member at Goddard.

Ongoing or planned satellite observations of the outburst involve NASA’s Swift satellite, Chandra X-ray Observatory and Fermi Gamma-ray Space Telescope, as well as Japan’s MAXI, the European Space Agency's INTEGRAL satellite, and the Italian Space Agency's AGILE gamma-ray mission. Ground-based facilities following the eruption include the 10.4-meter Gran Telescopio Canarias operated by Spain in the Canary Islands, the University of Leicester's 0.5-meter telescope in Oadby, U.K., the Nasu radio telescope at Waseda University in Japan, and amateur observatories.

V404 Cygni has flared many times since the eruption began, with activity ranging from minutes to hours. "It repeatedly becomes the brightest object in the X-ray sky -- up to 50 times brighter than the Crab Nebula, which is normally one of the brightest sources," said Erik Kuulkers, the INTEGRAL project scientist at ESA's European Space Astronomy Centre in Madrid. "It is definitely a 'once in a professional lifetime' opportunity."

In a single week, flares from V404 Cygni generated more than 70 "triggers" of the Gamma-ray Burst Monitor (GBM) aboard Fermi. This is more than five times the number of triggers seen from all objects in the sky in a typical week. The GBM triggers when it detects a gamma-ray flare, then it sends numerous emails containing increasingly refined information about the event to scientists on duty.

Traditional cloning and sequencing methods can be laborious, expensive, and time-consuming techniques, especially when applied to large sample numbers. Even for the routine cloning of small sample sizes, however, many research laboratories have yet to discover the power, ease, and efficiency of the Gibson Assembly® method. First described by Dan Gibson at the J. Craig Venter Institute (JCVI) in 2009, the Gibson Assembly method is a sequence-independent, seamless cloning method that offers many advantages over traditional cloning, most notably the ability to assemble multiple DNA fragments quickly, accurately, and efficiently in a single-tube reaction.

SGI-DNA, a Synthetic Genomics company, offers Gibson Assembly reagent kits: the Gibson Assembly HiFi 1-Step Kit can be used for the simultaneous assembly of up to 5 fragments and the Gibson Assembly Ultra Kit can be used for the simultaneous assembly of up to 15 fragments. The Gibson Assembly method can be leveraged for a variety of applications, including routine cloning, site-directed mutagenesis, and whole-genome synthesis.

The sequencing and cloning of n constructs (where n = the number of constructs) requires individually manipulating all n samples through the 10 workflow steps outlined on the previous page in n tubes (i.e., cloning 12 constructs requires handling 12 samples during every workflow stage). See Figure 1A for a schematic overview.

Combining Gibson Assembly shotgun cloning with next-generation sequencing is achieved by processing samples through the following steps:

In the Gibson Assembly next-generation shotgun cloning workflow, samples are pooled prior to gel purification and processed in size-correlated batches. Therefore, to sequence and clone n samples using Gibson Assembly shotgun cloning, n constructs will be individually PCR-amplified. Following amplification, however, samples are pooled. For convenience and processing using 96-well plates, pools are typically batched with 8 samples per batch. Because of batching, for the remaining workflow steps, instead of processing n samples, only n/8 sample batches are manipulated for each step, which translates into substantial reagent savings (see Figures 1B & 2).
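The handling savings are easy to quantify. A small sketch (hypothetical helper functions, assuming the 10 workflow steps and 8-samples-per-batch pooling described in the text) compares tube manipulations under the two workflows:

```python
import math

def traditional_manipulations(n, steps=10):
    """Traditional cloning: every construct is handled at every step."""
    return n * steps

def shotgun_manipulations(n, steps=10, batch_size=8):
    """Gibson shotgun workflow: individual PCR amplification (one step),
    then pooled batches of `batch_size` for the remaining steps."""
    batches = math.ceil(n / batch_size)
    return n + batches * (steps - 1)

n = 96  # e.g. one 96-well plate of constructs
print(traditional_manipulations(n))  # 960 tube manipulations
print(shotgun_manipulations(n))      # 96 + 12*9 = 204 manipulations
```

Reagent use scales the same way: after the PCR step, each batch of 8 consumes roughly the reagents one sample would in the traditional workflow.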

Elon Musk and Stephen Hawking are among the leaders from the science and technology worlds calling for a ban on autonomous weapons, warning that weapons with a mind of their own "would not be beneficial for humanity."

Along with 1,000 other signatories, Musk and Hawking signed their names to an open letter that will be presented this week at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina.

Autonomous weapons are defined by the group as artillery that can "search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions."

"Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is -- practically if not legally -- feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms," the letter, posted on the Future of Life Institute's website says.

If one country pushes ahead with the creation of robotic killers, the group fears, it will spur a global arms race that could spell disaster for humanity.

"Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group," the letter says. "We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people."

While the group warns of the potential carnage killer robots could inflict, they also stress they aren't against certain advances in artificial intelligence.

"We believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so," the letter says. "Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control."

Scientists and engineers at Arizona State University, in Tempe, have created the first lasers that can shine light over the full spectrum of visible colors. The device’s inventors suggest the laser could find use in video displays, solid-state lighting, and a laser-based version of Wi-Fi.

Although previous research has created red, blue, green, and other lasers, each of these usually emitted only one color of light. Creating a monolithic structure capable of emitting red, green, and blue all at once has proven difficult because it requires combining very different semiconductors. Growing such mismatched crystals right next to each other often results in fatal defects throughout each of these materials.

But now scientists say they’ve overcome that problem. The heart of the new device is a sheet only nanometers thick made of a semiconducting alloy of zinc, cadmium, sulfur, and selenium. The sheet is divided into different segments. When excited with a pulse of light, the segments rich in cadmium and selenium gave off red light; those rich in cadmium and sulfur emitted green light; and those rich in zinc and sulfur glowed blue.

The researchers grew this alloy in stages, carefully varying the temperature and other growth conditions over time. By controlling the interplay between the vapor, liquid, and solid phases of the different materials that made up this nano-sheet, they ensured that these different crystals could coexist.

The scientists can individually target each segment of the nano-sheet with a light pulse. Varying the power of the light pulse each section received tuned how intensely it shone, allowing the laser to produce 70 percent more perceptible colors than the most commonly used light sources.

Lasers could be far more energy-efficient than LEDs: While LED-based lighting produces up to about 150 lumens per watt of electricity, lasers could produce more than 400 lumens per watt, says Cun-Zheng Ning, a physicist and electrical engineer at Arizona State University at Tempe who worked on the laser. In addition, he says that white lasers could also lead to video displays with more vivid colors and higher contrast than conventional displays.

Another important potential application could be "Li-Fi", the use of light to connect devices to the Internet. Li-Fi could be 10 times faster than today’s Wi-Fi, but "the Li-Fi currently under development is based on LEDs," Ning says. He suggests white-laser-based Li-Fi could be 10 to 100 times faster than LED-based Li-Fi, because lasers can encode data much faster than white LEDs.

In the future, the scientists plan to explore whether they can excite these lasers with electricity instead of with light pulses. They detailed their findings online 27 July in the journal Nature Nanotechnology.

Currently, all light-emitting diodes (LEDs) emit light of only one color, which is predefined during fabrication. So far, tuning the color of light produced by a single LED has never been realized, despite numerous attempts.

So it's quite remarkable that in a new study, scientists have demonstrated an LED that not only can be tuned to emit different colors of light, but can do so across nearly the entire visible spectrum: from blue (450-nm wavelength) to red (750-nm wavelength)—basically all colors but the darkest blues and violets.

The key to achieving the color-tunable LED is making it out of graphene—the same material that has led to groundbreaking research in a number of areas, from batteries to solar cells to semiconductors. Despite graphene's success in these areas, graphene-based LEDs have never been realized before now, making the new device the first-ever graphene-based LED in addition to being the first color-tunable LED.

Applications of the new LED include high-quality, color-tunable LED displays for TVs and mobile devices, color-tunable LED light fixtures, and the potential for a variety of future graphene-based photonic devices.

The researchers, led by Professor Tian-Ling Ren at Tsinghua University in Beijing, made the light-emitting material from the interface of two different forms of graphene. These forms are graphene oxide (GO), which is produced from inexpensive graphite, and reduced graphene oxide (rGO), which is a more pristine form of GO.

Lying at the interface of the GO and rGO is a special type of partially reduced GO that has optical, physical, and chemical properties that lie somewhere in between those of GO and rGO. The most important "blended" property of the interfacial layer is that it has a series of discrete energy levels, which ultimately allows for the emission of light at many different energies, or colors.

The occurrence of this property is especially interesting because, on their own, neither GO nor rGO (or any other known form of graphene, for that matter) can emit any light at all. This is because neither material has the right size "bandgap," which is the gap between two energy bands that electrons must jump across to conduct electricity or emit light. While GO has an extremely large bandgap, rGO has a zero bandgap.

Instead of having a bandgap somewhere in between GO and rGO, the partially reduced interfacial GO actually has many different intermediate bandgaps as a result of how the blending occurs—not as a smooth transition, but in the form of rGO nanoclusters embedded within the GO layer. Because these rGO nanoclusters are reduced to varying degrees at the interface, they exhibit variations in their energy levels and, consequently, in the color of emitted light. These energy levels can be easily modulated by changing the applied voltage or by chemical doping, which selectively stimulates a single color of luminescence and enables tuning of the LED's color.
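The connection between bandgap and color follows from the photon-energy relation E = hc/λ. A quick sketch (illustrative, not from the paper) converts the LED's quoted tuning range into the photon energies an intermediate bandgap must supply:

```python
H_C_EV_NM = 1239.84  # Planck constant times speed of light, in eV·nm

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV for emission at a given wavelength: E = hc/lambda."""
    return H_C_EV_NM / wavelength_nm

# Energies spanning the LED's reported blue-to-red tuning range.
print(f"450 nm (blue): {photon_energy_ev(450):.2f} eV")  # ~2.76 eV
print(f"750 nm (red):  {photon_energy_ev(750):.2f} eV")  # ~1.65 eV
```

So the rGO nanoclusters at the interface must collectively provide levels spaced across roughly 1.65-2.76 eV for the device to cover the visible spectrum.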

"We found that a combination of GO and rGO can create a conductive and wide bandgap material," Ren told Phys.org. "It is commonly known that graphene does not have a bandgap. Therefore we were all surprised that our GO/rGO interface (a graphene-based system) can actually be luminescent."

Researchers have developed a light-emitting device that can turn on and off as many as 90 billion times per second. The device could be a way to greatly speed up data transmission in computers.

Devices like smartphones and computers currently process information with transistors that flip on and off billions of times per second. However, if microchips were able to use photons instead of electrons, computers might be able to operate a lot faster. To do this, however, engineers first had to create a light source that could be switched on and off extremely fast. While a laser might be able to do this, lasers are too power-hungry.

Researchers at Duke University, however, are getting closer to creating this kind of a light source. A team from the Pratt School of Engineering was able to push semiconductor quantum dots to emit light at over 90 gigahertz.

"This is something that the scientific community has wanted to do for a long time," said Duke assistant professor of electrical and computer engineering, Maiken Mikkelsen, in an interview. "We can now start to think about making fast-switching devices based on this research, so there's a lot of excitement about this demonstration."

The new device works by shining a laser on a silver nanocube, which makes the free electrons on the cube's surface oscillate together in a wave. These oscillations themselves create light, which in turn interacts with the free electrons on the cube. This coupled oscillation of light and electrons is called a plasmon.

Placing a sheet of gold only 20 atoms away creates an energy field between the gold and the silver cube. This field interacts with quantum dots sandwiched between the gold and the silver cube, causing the quantum dots to emit photons that can be switched on and off more than 90 billion times per second.

"The eventual goal is to integrate our technology into a device that can be excited either optically or electrically," said Thang Hoang, another researcher at the laboratory. "That's something that I think everyone, including funding agencies, is pushing pretty hard for."

The team is now working to create one single photon source by only having one quantum dot between the silver cube and the gold sheet. The team is also trying to find the optimum placement and orientation of the quantum dots to create the fastest rate possible.

Argonne scientists used Mira to identify and improve a new mechanism for eliminating friction, which fed into the development of a hybrid material that exhibited superlubricity at the macroscale for the first time. ALCF researchers helped enable the groundbreaking simulations by overcoming a performance bottleneck that doubled the speed of the team’s code.

While reviewing the simulation results of a promising new lubricant material, Argonne researcher Sanket Deshmukh stumbled upon a phenomenon that had never been observed before.

“I remember Sanket calling me and saying ‘you have got to come over here and see this. I want to show you something really cool,’” said Subramanian Sankaranarayanan, Argonne computational nanoscientist, who led the simulation work at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility.

They were amazed by what the computer simulations revealed. When the lubricant materials—graphene and diamond-like carbon (DLC)—slid against each other, the graphene began rolling up to form hollow cylindrical “scrolls” that helped to practically eliminate friction. These so-called nanoscrolls represented a completely new mechanism for superlubricity, a state in which friction essentially disappears.

“The nanoscrolls combat friction in very much the same way that ball bearings do by creating separation between surfaces,” said Deshmukh, who finished his postdoctoral appointment at Argonne in January.

Superlubricity is a highly desirable property. Considering that nearly one-third of the fuel in an automobile's tank is spent overcoming friction, a material that can achieve superlubricity would greatly benefit industry and consumers alike. Such materials could also help extend the lifetime of countless mechanical components that wear down due to incessant friction.
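The one-third figure can be made concrete with a rough illustrative calculation. The 50-litre tank size below is an assumed typical value, not a number from the article:

```python
# Back-of-the-envelope: if roughly one-third of an automobile's fuel
# goes to overcoming friction (the article's figure), how much fuel
# could eliminating friction losses save per tank?

FRICTION_FRACTION = 1 / 3   # share of fuel lost to friction (from the article)
TANK_LITRES = 50.0          # assumed typical tank size (illustrative)

saved = FRICTION_FRACTION * TANK_LITRES
print(f"Potential saving: about {saved:.1f} litres per {TANK_LITRES:.0f}-litre tank")
```

Even as a crude upper bound, that is on the order of 16 litres per fill-up, which is why superlubricious materials attract so much industrial interest.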
