A Vanderbilt University brain-mapping study has found that people’s ability to make decisions in novel situations decreases with age and is associated with a reduction in the integrity of two specific white-matter pathways.

The pathways connect an area in the cerebral cortex called the medial prefrontal cortex (involved with decision making) with two other areas deeper in the brain: the thalamus (a highly connected relay center in the brain) and ventral striatum (associated with the emotional and motivational aspects of behavior).

NOTE: All articles in the amazing-science newsletter can also be sorted by topic. To do so, click the FIND button (symbolized by the FUNNEL at the top right of the screen) to display all the relevant postings sorted by topic.

The AI boom offers Chinese chipmakers a chance to catch up after years of lagging behind.

In an office at Tsinghua University in Beijing, a computer chip is crunching data from a nearby camera, looking for faces stored in a database. Seconds later, the same chip, called Thinker, is handling voice commands in Chinese. Thinker is designed to support neural networks. But what’s special is how little energy it uses—just eight AA batteries are enough to power it for a year.

Thinker can dynamically tailor its computing and memory requirements to meet the needs of the software being run. This is important since many real-world AI applications—recognizing objects in images or understanding human speech—require a combination of different kinds of neural networks with different numbers of layers.

In December 2017, a paper describing Thinker’s design was published in the IEEE Journal of Solid-State Circuits, a top journal in computer hardware design. For the Chinese research community, it was a crowning achievement. The chip is just one example of an important trend sweeping China’s tech sector. The country’s semiconductor industry sees a unique opportunity to establish itself amid the current wave of enthusiasm for hardware optimized for AI. Computer chips are key to the success of AI, so China needs to develop its own hardware industry to become a real force in the technology (see “China’s AI Awakening”).

China has been working to improve its technology industry and enter the race for AI, competing against companies such as Google and Intel. One goal is to develop chips like Thinker that can add AI capabilities to any device. However, because China's semiconductor industry lags far behind that of countries such as the U.S., China has been increasing its imports of integrated circuits, recording a nearly 13% rise in imports over the past year. In December 2017, China's Ministry of Industry and Information Technology released a three-year plan calling for the mass production of chips like Thinker by 2020.

I believe the approach China is taking will be a huge step forward in our ever-expanding technology industry. It could allow all electronic devices, such as computers and phones, to have AI capabilities similar to Apple's Siri and Samsung's Bixby. However, I also believe it will only add to the debate about whether mankind is becoming more dependent on technology. Integrating AI software into all of our devices will minimize the actions required to operate them and make those devices more independent.

In the summer of 1935, the physicists Albert Einstein and Erwin Schrödinger engaged in a rich, multifaceted and sometimes fretful correspondence about the implications of the new theory of quantum mechanics. The focus of their worry was what Schrödinger later dubbed entanglement: the inability to describe two quantum systems or particles independently, after they have interacted.

Until his death, Einstein remained convinced that entanglement showed how quantum mechanics was incomplete. Schrödinger thought that entanglement was the defining feature of the new physics, but this didn’t mean that he accepted it lightly. ‘I know of course how the hocus pocus works mathematically,’ he wrote to Einstein on 13 July 1935. ‘But I do not like such a theory.’

Schrödinger’s famous cat, suspended between life and death, first appeared in these letters, a byproduct of the struggle to articulate what bothered the pair.

The problem is that entanglement violates how the world ought to work. Information can’t travel faster than the speed of light, for one. But in a 1935 paper, Einstein and his co-authors showed how entanglement leads to what’s now called quantum nonlocality, the eerie link that appears to exist between entangled particles. If two quantum systems meet and then separate, even across a distance of thousands of lightyears, it becomes impossible to measure the features of one system (such as its position, momentum and polarity) without instantly steering the other into a corresponding state.
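The correlation being described can be made concrete with the simplest entangled state. As a standard textbook illustration (not the specific state analysed in the 1935 paper), two photons prepared in the Bell state

```latex
\lvert \Phi^{+} \rangle \;=\; \frac{1}{\sqrt{2}}\bigl( \lvert 0 \rangle_A \lvert 0 \rangle_B \;+\; \lvert 1 \rangle_A \lvert 1 \rangle_B \bigr)
```

have no definite individual state, yet a measurement on photon A that yields $\lvert 0 \rangle$ leaves photon B in $\lvert 0 \rangle$ with certainty, no matter how far apart the two are; that instantaneous 'steering' is precisely the nonlocality at issue.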

To date, most experiments have tested entanglement over spatial gaps. The assumption is that the ‘nonlocal’ part of quantum nonlocality refers to the entanglement of properties across space. But what if entanglement also occurs across time? Is there such a thing as temporal nonlocality?

The answer, as it turns out, is ‘yes’. Just when you thought quantum mechanics couldn’t get any weirder, a team of physicists at the Hebrew University of Jerusalem reported in 2013 that they had successfully entangled photons that never coexisted. Previous experiments involving a technique called ‘entanglement swapping’ had already shown quantum correlations across time, by delaying the measurement of one of the coexisting entangled particles; but Eli Megidish and his collaborators were the first to show entanglement between photons whose lifespans did not overlap at all.

Hello, quantum world! Inside a small laboratory in lush countryside about 50 miles north of New York City, an elaborate tangle of tubes and electronics dangles from the ceiling. This mess of equipment is a computer. Not just any computer, but one on the verge of passing what may go down as one of the most important milestones in the history of the field.

Quantum computers promise to run calculations far beyond the reach of any conventional supercomputer. They might revolutionize the discovery of new materials by making it possible to simulate the behavior of matter down to the atomic level. Or they could upend cryptography and security by cracking otherwise invincible codes. There is even hope they will supercharge artificial intelligence by crunching through data more efficiently.

Yet only now, after decades of gradual progress, are researchers finally close to building quantum computers powerful enough to do things that conventional computers cannot. It’s a landmark somewhat theatrically dubbed “quantum supremacy.” Google has been leading the charge toward this milestone, while Intel and Microsoft also have significant quantum efforts. And then there are well-funded startups including Rigetti Computing, IonQ, and Quantum Circuits.

“Nature is quantum, goddamn it! So if we want to simulate it, we need a quantum computer.” No other contender can match IBM’s pedigree in this area, though. Starting 50 years ago, the company produced advances in materials science that laid the foundations for the computer revolution. Which is why, last October, I found myself at IBM’s Thomas J. Watson Research Center to try to answer these questions: What, if anything, will a quantum computer be good for? And can a practical, reliable one even be built?

The more we study natural biological cells, the more we learn about how to control them or build artificial versions. These independent avenues of study have huge potential, but also their limitations. Researchers from Imperial College London have worked out a way to borrow the strengths of each, fusing together living and non-living cells to create tiny chemical factories that might one day aid drug delivery.

In past work, scientists have packaged proteins and enzymes inside artificial casings to better treat conditions like cancer or diabetes. Rather than just using some natural parts, the Imperial College study instead wrapped entire biological cells inside artificial ones. "Biological cells can perform extremely complex functions, but can be difficult to control when trying to harness one aspect," says Oscar Ces, lead researcher on the project. "Artificial cells can be programmed more easily but we cannot yet build in much complexity. Our new system bridges the gap between these two approaches by fusing whole biological cells with artificial ones, so that the machinery of both works in concert to produce what we need."

To pair up natural and artificial cells, the team used a microfluidic process to guide liquids very precisely through tiny channels. A liquid solution containing the biological cells was carefully pumped into a tube of oil, forcing the liquid into droplets surrounded by a lipid shell. The droplets containing cells were then dripped into a chamber where oil floated on top of water. Their weight dragged them down into the watery solution, sealing them inside a bilayered bubble that could then be encased in the artificial cell wall.

The end result is hybrid cells, made up of an artificial shell containing a natural cell and enzymes. To test whether the living and non-living halves of the cell worked together, the team designed an experiment in which the two parts would come together to produce a fluorescent chemical. Sure enough, a healthy glow indicated that all was in working order.

The asteroid, known as Bennu, is currently orbiting the Sun about 54 million miles from Earth. The 1,600-foot-wide, 74-billion-pound space object is probably not going to hit the Earth, but it’s not in the U.S. government’s nature to sit idle when a potential threat — no matter how unlikely — exists. NASA, the National Nuclear Security Administration, and two Energy Department weapons labs have come together to design a spacecraft that could blow up Bennu if it ever strays too close.

According to Buzzfeed News, the Hypervelocity Asteroid Mitigation Mission for Emergency Response spacecraft, HAMMER for short, could use one of two tactics to combat an impact. If an asteroid is small enough, HAMMER would use an 8.8-ton “impactor” to smash the object. But, if the asteroid is too big, the spacecraft would instead use an on-board nuclear device to blow it up.
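The arithmetic behind the "impactor" option can be sketched with basic momentum conservation. The impact speed and momentum-enhancement factor below are assumed values for illustration, not published mission parameters:

```python
# Rough kinetic-impactor estimate: momentum transfer from HAMMER to Bennu.
# Assumed: a typical interplanetary closing speed of ~10 km/s and a momentum
# enhancement factor (beta) of 1, i.e. no extra push from ejecta.

impactor_mass_kg = 8.8 * 1000        # the 8.8-ton impactor, taken as metric tons
bennu_mass_kg = 74e9 * 0.4536        # 74 billion pounds -> ~3.4e10 kg
impact_speed_m_s = 10_000            # assumed closing speed
beta = 1.0                           # assumed momentum enhancement

delta_v = beta * impactor_mass_kg * impact_speed_m_s / bennu_mass_kg
print(f"velocity change: {delta_v * 1000:.2f} mm/s")
```

A velocity change of a few millimetres per second sounds negligible, but applied years before a predicted encounter it shifts the asteroid's arrival position by thousands of kilometres, which is why deflection concepts favour acting early.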

Physicist David Dearborn from the Lawrence Livermore National Laboratory even suggested to Buzzfeed News that multiple HAMMER craft could throw themselves in front of the asteroid to slow it and change its course.

Though autonomous vehicles are still a far-off fantasy in the minds of many, 19 states have passed legislation relating to them — many starting small by defining terms like "automated driving system," "dynamic driving task" or "autonomous vehicle."

Additionally, governors from four states have issued executive orders creating councils and working groups of stakeholders and public officials dedicated to looking at how their states should proceed.

Where states like Florida have embraced fewer regulations, others, like California, have taken more tightly regulated approaches. Whatever the approach, the future of transportation is in the midst of a revolution.

The revolution, in short, means that the traditional rules no longer hold up when applied to the rapidly advancing technology. From the electrification of vehicles to the growth of transportation network companies and automated driving, traditional driving regulations must be updated to keep pace.

Self-driving vehicles can already be spotted on test tracks across the country and on public streets in select cities, and several major companies including Ford, Toyota and BMW have all committed to putting driverless vehicles on American roads within five years.

The dream of nuclear fusion is on the brink of being realised, according to a major new US initiative that says it will put fusion power on the grid within 15 years.

The project, a collaboration between scientists at MIT and a private company, will take a radically different approach to other efforts to transform fusion from an expensive science experiment into a viable commercial energy source. The team intend to use a new class of high-temperature superconductors they predict will allow them to create the world’s first fusion reactor that produces more energy than needs to be put in to get the fusion reaction going.

Bob Mumgaard, CEO of the private company Commonwealth Fusion Systems, which has attracted $50 million in support of this effort from the Italian energy company Eni, said: “The aspiration is to have a working power plant in time to combat climate change. We think we have the science, speed and scale to put carbon-free fusion power on the grid in 15 years.”

Computers that operate more like the human brain than like conventional machines—a field sometimes referred to as neuromorphic computing—have promised a new era of powerful computing.

While this all seems promising, one of the big shortcomings of neuromorphic computing has been that it doesn’t mimic the brain in a very important way. In the brain, for every neuron there are roughly a thousand synapses—the junctions across which electrical signals pass between neurons. This poses a problem because a conventional two-terminal memory device offers only a single connection, hardly an accommodating architecture for multiplying signals.
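The fan-in the brain achieves can be sketched in a few lines: each neuron sums a thousand independently weighted inputs, and each weight needs its own adjustable, persistent hardware element. A minimal illustration (generic artificial-neuron arithmetic, not the Northwestern device):

```python
# A single artificial neuron integrating ~1000 weighted synaptic inputs.
# In hardware, every weight below must be stored by its own tunable element,
# which is why multi-terminal memory devices matter for neuromorphic chips.
import random

random.seed(0)
n_synapses = 1000
weights = [random.uniform(-1, 1) for _ in range(n_synapses)]  # synaptic strengths
spikes = [random.choice([0, 1]) for _ in range(n_synapses)]   # incoming signals

potential = sum(w * s for w, s in zip(weights, spikes))
fired = potential > 0    # neuron fires if the weighted input crosses threshold
print(f"{sum(spikes)} active synapses -> potential {potential:.2f}, fired: {fired}")
```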

Now researchers at Northwestern University, led by Mark Hersam, have developed a new device that combines memristors—two-terminal non-volatile memory devices based on resistance switching—with transistors to create what Hersam and his colleagues have dubbed a “memtransistor” that performs both memory storage and information processing.

While this work was recognized as mimicking the low-power computing of the human brain, critics didn’t really believe that it was acting like a neuron, since it could only transmit a signal from one artificial neuron to another. This was far short of a human brain, which is capable of making tens of thousands of such connections.

“Traditional memristors are two-terminal devices, whereas our memtransistors combine the non-volatility of a two-terminal memristor with the gate-tunability of a three-terminal transistor,” said Hersam to IEEE Spectrum. “Our device design accommodates additional terminals, which mimic the multiple synapses in neurons.”

Scientists have created the world’s first rechargeable proton battery, a crucial step towards cheaper and more environmentally-friendly energy storage. While the battery is just a small-scale prototype, it has the potential to be competitive with currently available lithium-ion batteries.

The rechargeable battery, created by researchers at RMIT University in Melbourne, uses carbon and water instead of lithium. The lead researcher, Professor John Andrews, said that as the world moved towards renewables, there would be a significant need for storage technologies that relied on cheap and abundant materials.

“Lithium-ion batteries are great but they rely on ultimately scarce and expensive resources,” he said. “Hydro is also a good technology but suitable sites are limited and the cost may be very high.

“The advantage is we’re going to be storing protons in a carbon-based material, which is abundant, and we are getting protons from water which is readily available.” The battery itself produces no carbon emissions and it can store electricity from zero-emissions renewables.

Andrews said it could be commercially available within five to 10 years. “When it is commercially available, it would be a competitor to the Tesla Powerwall and then eventually we’d hope we might find applications at the scale of the huge Tesla battery [in South Australia] and even larger.”

Imagine that you’re a voracious carnivore that sinks its teeth into the tail of a small reptile, anticipating a delicious lunch, when, in a flash, the reptile is gone and you are left holding a wiggling tail between your jaws.

A new study by a University of Toronto Mississauga research team led by Professor Robert Reisz and PhD student Aaron LeBlanc, published March 5 in the open-access journal Scientific Reports, shows how a group of small reptiles that lived 289 million years ago could detach their tails to escape the grasp of their would-be predators — the oldest known example of such behaviour. The reptiles, called Captorhinus, weighed less than 2 kilograms and were smaller than the predators of the time. They were abundant in terrestrial communities during the Early Permian period and are distant relatives of all reptiles today.

As small omnivores and herbivores, Captorhinus and its relatives had to scrounge for food while avoiding being preyed upon by large meat-eating amphibians and ancient relatives of mammals. “One of the ways captorhinids could do this,” says first author LeBlanc, “was by having breakable tail vertebrae.” Like many present-day lizard species, such as skinks, that can detach their tails to escape or distract a predator, these reptiles had cracks running through the middle of many of their tail vertebrae.

It is likely that these cracks acted like the perforated lines between two paper towel sheets, allowing vertebrae to break in half along planes of weakness. “If a predator grabbed hold of one of these reptiles, the vertebra would break at the crack and the tail would drop off, allowing the captorhinid to escape relatively unharmed,” says Reisz, a Distinguished Professor of Biology at the University of Toronto Mississauga.

Quantum mechanics has fundamental speed limits—upper bounds on the rate at which quantum systems can evolve. However, two groups working independently have published papers showing for the first time that quantum speed limits have a classical counterpart: classical speed limits. The results are surprising, as previous research has suggested that quantum speed limits are purely quantum in nature and vanish for classical systems.

Both groups—one consisting of Brendan Shanahan and Adolfo del Campo at the University of Massachusetts along with Aurelia Chenu and Norman Margolus at MIT, the other composed of Manaka Okuyama of the Tokyo Institute of Technology and Masayuki Ohzeki at Tohoku University—have published papers on classical speed limits in Physical Review Letters.

Over the past several decades, physicists have been investigating quantum speed limits, which determine the minimum time for a given process to occur in terms of the energy fluctuations of the process. A quantum speed limit can thus be thought of as a time-energy uncertainty relation. Although this concept is similar to Heisenberg's uncertainty principle, which relates position and momentum uncertainties, time is treated differently in quantum mechanics (as a parameter rather than an observable).
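The two standard quantum speed limits can be written explicitly. For a system with energy uncertainty $\Delta E$ and mean energy $\langle E \rangle$ above the ground state, the Mandelstam–Tamm and Margolus–Levitin bounds give the minimum time to evolve to an orthogonal state:

```latex
\tau_{\perp} \;\ge\; \max\!\left( \frac{\pi \hbar}{2\, \Delta E},\; \frac{\pi \hbar}{2\, \langle E \rangle} \right)
```

Both bounds have the same trade-off shape: the larger the energy scale, the faster the system is permitted to evolve, which is the structure the new classical limits mirror.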

Still, the similarities between the two relations, along with the fact that Heisenberg's uncertainty principle is a strictly quantum phenomenon, have long suggested that quantum speed limits are likewise strictly quantum and have no classical counterpart. The only known limitation on the speed of classical systems is that objects may not travel faster than the speed of light due to special relativity, but this is unrelated to the energy-time relation in quantum speed limits.

The new papers show that speed limits based on a trade-off between energy and time do exist for classical systems, and in fact, that there are infinitely many of these classical speed limits. The results demonstrate that quantum speed limits are not based on any underlying quantum phenomena, but instead are a universal property of the description of any physical process, whether quantum or classical.

The hunt for habitability outside our solar system is ongoing and won’t be slowing down anytime soon. The pursuit of exoplanets not only furthers the search for extraterrestrial life, but also helps us understand the formation and evolution of celestial objects, including those close to home. With the help of space- and ground-based telescopes, a group of researchers surveying red dwarf stars near Earth identified 15 new exoplanets — and one of them has the potential to host liquid water.

The team of researchers, led by Teruyuki Hirano of Tokyo Institute of Technology’s Department of Earth and Planetary Sciences, used data from NASA’s Kepler spacecraft and observations from Spain’s Nordic Optical Telescope and Hawaii’s Subaru Telescope to carry out the study. The findings were published in The Astronomical Journal in a series of two papers.

The “star” of the study is K2-155, a bright red dwarf about 200 light-years away. The researchers found three super-Earths (planets larger than Earth but smaller than Neptune) orbiting the star, with the farthest planet, K2-155d, potentially in its habitable zone. By measuring the radius of K2-155d, which is estimated at about 1.6 times that of Earth, and using a 3-D global climate simulation, they found it’s highly probable that liquid water could exist on its surface.

The team can’t say this with certainty, though, because the unmeasured radius and temperature of its host star could impact K2-155d’s habitability — habitability that also depends on the assumptions that go into the simulation. "In our simulations, the atmosphere and the composition of the planet were assumed to be Earth-like, and there's no guarantee that this is the case," said Hirano in a press release.
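The role those unmeasured stellar properties play can be seen in the standard back-of-the-envelope habitability estimate, the planet's equilibrium temperature (a textbook formula, not the team's full 3-D climate simulation):

```latex
T_{\mathrm{eq}} \;=\; T_{\star}\,(1 - A)^{1/4} \sqrt{\frac{R_{\star}}{2a}}
```

Here $T_{\star}$ and $R_{\star}$ are the star's effective temperature and radius, $a$ is the planet's orbital distance, and $A$ is the assumed albedo. The stellar radius and temperature enter directly, which is why leaving them unmeasured dominates the uncertainty in K2-155d's habitability.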

Gene editing techniques now are paving the way for an “off-the-shelf” CAR T-cell strategy for treatment of relapsed and refractory T-cell acute lymphoblastic leukemia (T-ALL) and non-Hodgkin T-cell lymphoma (T-NHL) without a requirement for autologous T cells. Researchers are now reporting in the journal Leukemia that they have used the gene-editing technology CRISPR to engineer human T cells that can attack human T-cell cancers.

The researchers engineered the T cells so any donor’s T cells could be used. A matched donor with similar immunity is not required and neither are the patient’s own T cells. “We were able to efficiently (> 90%) delete both copies of CD7 and the T-cell receptor alpha subunit (TRAC) and insert a unique CAR to CD7 in human T cells, which maintains normal killing function of these genetically manipulated human T cells in vitro and in vivo. We were able to show that these genetically modified CAR T cells kill both CD7+ human T-ALL cells and T-NHL in vitro and in vivo in immunodeficient mice,” said senior author John F. DiPersio, MD, PhD, who is a professor of medicine in oncology at Washington University School of Medicine in St. Louis, Missouri.

Dr. DiPersio’s team first generated a novel CAR T-cell strategy targeting CD7, allowing for the targeting and killing of all cells with CD7 on the surface. To prevent T-cell fratricide, the researchers used CRISPR/Cas9 gene editing to remove CD7 from healthy T cells. In addition, they used CRISPR gene editing to simultaneously eliminate the therapeutic T cells’ ability to see healthy tissues as foreign. “Our multiplex gene editing of CD7 and TRAC renders these T cells immune to fratricide and from causing graft versus host disease (GvHD). Thus, this represents the first ‘off-the-shelf’ third-party CAR T allowing for targeting of T-ALL, T-NHL, and natural killer (NK) malignancies (also CD7+) without risk of fratricide or GvHD,” Dr. DiPersio told Cancer Network.

The researchers demonstrated that this approach is effective in mice with T-ALL taken from patients. Mice treated with the gene-edited T cells targeted to CD7 survived 65 days, compared with 31 days in a comparison group that received engineered T cells targeting a different protein. The researchers found no evidence of GvHD in the mice. In addition, the study revealed that the therapeutic T cells remained in the blood for at least 6 weeks after the initial injection, suggesting that the therapy could ramp up again to kill cancerous T cells if they return.

“The development of CAR T to T-cell and NK-cell malignancies has now been accomplished through the use of CRISPR/Cas9 gene editing and lentiviral gene transduction technologies. This provides the first pathway for overcoming major obstacles of targeting T-cell and NK-cell malignancies using cellular therapy,” said Dr. DiPersio, who is also the deputy director of Siteman Cancer Center. The researchers now hope to translate these findings into the clinic specifically for the treatment of children and adults with relapsed and refractory T-cell hematologic malignancies. The first clinical trial is set to begin in the next 12 to 18 months.

How much a person’s intelligence is governed by nature or nurture has been debated throughout the ages. A new piece of research has thrown some interesting evidence into the mix, identifying over 500 genes that appear to be linked to sharp intelligence.

The research is the largest study looking at how genes and intelligence are linked to date. Using the heaps of data gathered by the UK Biobank, scientists at the University of Edinburgh, the University of Southampton, and Harvard University compared DNA variants in over 248,000 people from across the world.

As they explain in the Nature journal Molecular Psychiatry, they managed to find 538 genes that play a role in intellectual ability, along with 187 regions in the human genome that are linked to cognitive skills.

In theory, this means that scientists could get an insight into your IQ just by analyzing your spit in a pot. As part of this new study, the researchers tested out this idea and managed to predict differences in intelligence of a group of individuals using their DNA alone.
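Prediction from DNA alone is typically done with a polygenic score: a weighted sum over a person's measured variants, with weights taken from the association study. A minimal sketch (the variant names and effect sizes below are made up for illustration; the study's actual scoring pipeline is far more involved):

```python
# Toy polygenic score: sum each variant's allele count (0, 1 or 2 copies)
# times the effect size estimated in the association study.

effect_sizes = {"rs0001": 0.03, "rs0002": -0.01, "rs0003": 0.02}  # hypothetical

def polygenic_score(genotype):
    """genotype maps variant id -> allele count (0, 1 or 2)."""
    return sum(effect_sizes[v] * genotype.get(v, 0) for v in effect_sizes)

person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(round(polygenic_score(person), 3))  # 0.05
```

Because each variant contributes only a tiny effect, such scores capture a small slice of the variation, consistent with the 7 percent figure reported later in the article.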

“Our study identified a large number of genes linked to intelligence," Dr David Hill, from the University of Edinburgh's Centre for Cognitive Aging and Cognitive Epidemiology, said in a statement. "Importantly, we were also able to identify some of the biological processes that genetic variation appears to influence to produce such differences in intelligence, and we were also able to predict intelligence in another group using only their DNA.”

That said, the impact of genetics versus environment on a person’s intelligence remains as hazy as ever. The study was only able to predict 7 percent of the intelligence differences between people from their DNA, which is far from definitive. “We know that environments and genes both contribute to the differences we observe in people’s intelligence," Professor Ian Deary, the study's principal investigator, added.

"This study adds to what we know about which genes influence intelligence, and suggests that health and intelligence are related in part because some of the same genes influence them.”

So, don’t be too disheartened by the suggestion that some aspects of intelligence could be programmed into your DNA. Just as other scientific studies have suggested, it appears that the brilliance of your brain is also influenced by a cocktail of external influences, from your upbringing and life experiences, to even your health.

Touted as the next big tech disrupter in the world of business, blockchain technology could make transactions, from finance to food chain logistics, faster and more secure.

The first blockchain was invented in 2008 specifically to support the bitcoin cryptocurrency. Today, bitcoin owners can use the cryptocoin to purchase everything from plane tickets to electronics. While it may be a better alternative to fiat currency in many ways, the applications extend far beyond shopping.

Businesses are using blockchain in ways that benefit both their bottom line and their customers. Hyperledger, an open-source collaborative effort created by the Linux Foundation to advance blockchain technologies, can streamline financial transactions and even track the food supply chain.

In 2017, Intel demonstrated how Hyperledger’s Sawtooth blockchain platform could be combined with the Internet of Things (IoT) to improve the seafood supply chain from ocean to table.

“Previously, the information available to any one company in the supply chain was fairly narrow,” said Reed. For example, a restaurant owner might know the date fish was ordered, when it’s due to arrive and the price paid, but they may have a difficult time tracking the fish en route from sea to table.

To improve the process, IoT sensors were affixed to the harvested fish to gauge its shipping location, temperature during transport, movement and more. Using Hyperledger Sawtooth, anyone along the supply chain has the ability to keep better track of the fish. “Blockchain implementation allowed us to know for certain when the fisherman caught the fish, when they stored it, its temperature at any time, when it was inspected and exactly when it arrived at a restaurant,” said Reed.
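The traceability described above rests on blockchain's core data structure: each recorded event carries a hash of the previous one, so no entry can be altered without breaking the chain. A minimal sketch (the event fields are illustrative, not Hyperledger Sawtooth's actual transaction format):

```python
import hashlib
import json

def add_event(chain, event):
    """Append an event, linking it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"event": event, "prev_hash": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)

def verify(chain):
    """Recompute every hash and link; any tampering breaks verification."""
    for i, record in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {"event": record["event"], "prev_hash": record["prev_hash"]}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["prev_hash"] != expected_prev or record["hash"] != recomputed:
            return False
    return True

ledger = []
add_event(ledger, {"stage": "caught", "temp_c": 2.1})
add_event(ledger, {"stage": "stored", "temp_c": 1.8})
add_event(ledger, {"stage": "delivered", "temp_c": 2.0})
print(verify(ledger))  # True

ledger[1]["event"]["temp_c"] = 8.0  # tamper with a stored temperature...
print(verify(ledger))               # ...and verification fails: False
```

In a real deployment the chain is replicated across many parties, so a tamperer would also need to rewrite every other participant's copy; the hash linking shown here is what makes that tampering detectable at all.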

Eventually, restaurants and grocers could use a similar blockchain implementation to track meat and produce. Under traditional supply chain protocols, contaminated food is very difficult to trace. With blockchain, however, a grocery store could know within minutes, instead of days, exactly which products to pull from shelves, potentially stopping contaminated food from ever reaching consumers.

Consumers can be privy to a product’s entire history on a blockchain, perhaps through a simple scan of a QR code on a package, said Reed. The consumer would know more about what they were eating, and the price they paid would be more likely to accurately reflect the quality of the product.

In a move that could give a voice to the 300,000 people around the world who have had their larynx removed due to cancer, scientists at the MARCS Institute at Western Sydney University have tested a non-invasive artificial larynx and found it capable of generating a high-quality voice. Unlike existing prosthetics that rely on input from the nerves or muscles of the larynx, the Pneumatic Artificial Larynx (PAL) device uses the patient's respiratory system and doesn't need to be surgically implanted.

"The existing standard of care requires the surgical application of prosthetic devices into the open wound in the neck, known as the 'stoma', which is left after a laryngectomy so that a patient can breathe," says Postdoctoral Research Fellow, Dr Farzaneh Ahmadi. "The surgery is invasive; infections and complications are common; and the resulting voice is often hoarse and whispery."

In a pre-clinical trial, researchers working on The Bionic Voice research project developed an electronic adaptation of a PAL device called the Pneumatic Bionic Voice (PBV), which uses the patient's breath to create a humming sound that is then converted to speech with movement of the lips and tongue. The study found that a device exclusively driven by respiration could in fact aid in recreating the function of the larynx without any nerve input and produce a quality of voice better than the existing gold standard.

The device tested features a tube running from the stoma to the mouth, which makes it cumbersome to use.

However, the team has plans to develop a functional PBV prosthesis in the form of a "control unit" that can be applied over the stoma, and a "voice source" unit that sits on the roof of the mouth. Dr Ahmadi claims this device would be the first non-invasive, non-surgical, electronic voice prosthesis in the world and would be capable of producing more human-sounding speech than current devices.

In order to have their vital signs continuously monitored, patients in emergency rooms have to be hooked up to a variety of sensors – this makes it awkward for them to move around, among other things. Soon, however, all those machines could be replaced by one small electronic patch that adheres to their chest.

The device was developed by Swiss startup Smartcardia, a company that was spun off from the EPFL research institute.

Applied to a patient's chest under their clothes, the patch uses integrated sensors to monitor stats such as temperature, pulse, blood pressure, blood oxygen levels, cardiac rhythm and cardiac electrical activity – it reportedly does so just as accurately as traditional cable-based sensors.

The data is wirelessly transmitted to a server, which doctors can access in real time via a smartphone, tablet or any other internet-connected device. Along with its use in hospitals, the patch could also allow patients to be remotely monitored at home as they go about their daily activities. This would minimize the number of hospital visits needed just to get checked, while ensuring that any problems are detected right away.

To that end, Smartcardia is also working on an artificial intelligence system that would allow the patch to spot health problems early. It would do so by detecting slight changes in a patient's vital signs and linking them together, to see if the combination corresponded to existing models of serious conditions.
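The core idea, combining several individually unremarkable deviations into one warning signal, can be sketched in a few lines. Everything below is hypothetical: the baselines, the vital signs chosen, and the squared z-score combination are illustrative stand-ins, not Smartcardia's actual algorithm.

```python
# Illustrative sketch: combine z-scores of several vital signs into a
# single anomaly score, so that small simultaneous deviations that would
# each look harmless on their own add up to a clear warning.

def anomaly_score(vitals, baselines):
    """vitals: sign name -> value; baselines: sign name -> (mean, std)."""
    score = 0.0
    for name, value in vitals.items():
        mean, std = baselines[name]
        score += ((value - mean) / std) ** 2  # squared z-score per sign
    return score

# Hypothetical per-patient baselines.
baselines = {"pulse": (70, 8), "temp": (36.8, 0.3), "spo2": (97, 1.5)}

# Each sign in the second reading is only mildly off, but together they
# push the combined score far above the normal reading.
normal = anomaly_score({"pulse": 72, "temp": 36.9, "spo2": 97}, baselines)
drift  = anomaly_score({"pulse": 82, "temp": 37.3, "spo2": 94}, baselines)
print(normal < drift)
```

The design point is that the score links the signs together, as the article describes, rather than alarming on any single threshold.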

The Smartcardia patch has already been tested on hundreds of patients at several hospitals, and recently received the European Union's CE marking for medical devices. Large-scale production has begun, and a commercial launch for the Swiss and EU markets should reportedly be taking place quite soon.

The faithful inheritance of the epigenome is critical for cells to maintain gene expression programs and cellular identity across cell divisions. A team of scientists has now mapped strand-specific DNA methylation behind replication forks, showing that the vast majority of the DNA methylome is maintained within 20 minutes of replication and that some hemi-methylated CpG dinucleotides (hemiCpGs) are inherited. Mapping the nascent DNA methylome targeted by each of the three DNA methyltransferases (DNMTs) reveals interactions between DNMTs and substrate daughter cytosines en route to maintenance methylation or hemimethylation. Finally, the scientists show the inheritance of hemiCpGs at short regions flanking CCCTC-binding factor (CTCF)/cohesin binding sites in pluripotent cells. Eliminating hemimethylation reduces the frequency of chromatin interactions emanating from these sites, suggesting a role for hemimethylation as a stable epigenetic mark regulating CTCF-mediated chromatin interactions.
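For readers new to the terminology, the three possible post-replication states of a CpG site can be captured in a toy classifier. This is a minimal sketch for orientation only, not the study's analysis pipeline.

```python
# Minimal sketch: classify a CpG site by the methylation calls on its two
# strands after DNA replication.

def cpg_state(parent_strand_methylated, daughter_strand_methylated):
    if parent_strand_methylated and daughter_strand_methylated:
        return "methylated"        # mark fully maintained after replication
    if parent_strand_methylated or daughter_strand_methylated:
        return "hemimethylated"    # a hemiCpG: only one strand carries the mark
    return "unmethylated"

print(cpg_state(True, True))    # methylated
print(cpg_state(True, False))   # hemimethylated
print(cpg_state(False, False))  # unmethylated
```

In these terms, the study's surprise is that the hemimethylated state is not always a transient way-station to full methylation but can itself be stably inherited.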

CTCF is a highly conserved zinc finger protein and is best known as a transcription factor. It can function as a transcriptional activator, a repressor or an insulator protein, blocking the communication between enhancers and promoters. CTCF can also recruit other transcription factors while bound to chromatin domain boundaries. The three-dimensional organization of the eukaryotic genome dictates its function, and CTCF serves as one of the core architectural proteins that help establish this organization. The mapping of CTCF-binding sites in diverse species has revealed that the genome is covered with CTCF-binding sites.

CTCF is a member of the BORIS/CTCF gene family and encodes a transcriptional regulator protein with 11 highly conserved zinc finger (ZF) domains. This nuclear protein is able to use different combinations of its ZF domains to bind different DNA target sequences and proteins.

It’s been just over a week since the first massive memcached-fueled denial-of-service attack, and proof-of-concept attack code is already circulating. The code’s author says it is being released “to bring more attention to the flaw and force others into updating their devices.”

The era of terabit DDoS attacks was ushered in this month, as giant denial-of-service attacks set records of 1.35 terabits per second and then 1.7 terabits per second. Both abused unsecured memcached servers to amplify traffic; the first targeted GitHub itself, while the second hit an unnamed U.S. service provider, according to Arbor Networks.

A second tool was released on Monday, BleepingComputer reports, but the author is unknown. Akamai and Cloudflare predicted more attacks following the record-setting efforts. Cloudflare CEO Matthew Prince said he was seeing separate attacks of a similar size last week.
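The danger of memcached reflection comes down to simple arithmetic: a tiny spoofed UDP request can trigger an enormous response aimed at the victim. The sketch below uses illustrative numbers, not measurements from the GitHub incident (reported amplification factors for memcached ran into the tens of thousands).

```python
# Back-of-envelope arithmetic for a reflection/amplification attack.
# These byte counts are illustrative, not measured values.

request_bytes = 15           # a small spoofed UDP query
response_bytes = 750_000     # a hypothetical large memcached response

amplification = response_bytes / request_bytes
print(f"amplification factor: {amplification:,.0f}x")

# Spoofed-traffic bandwidth an attacker would need in order to produce
# a 1.35 Tb/s flood at this amplification factor:
attack_tbps = 1.35
attacker_mbps = attack_tbps * 1e6 / amplification
print(f"attacker bandwidth needed: ~{attacker_mbps:.0f} Mb/s")
```

At a 50,000x factor, a record-setting flood requires only tens of megabits per second from the attacker, which is why exposed reflectors are so dangerous.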

Much the way ships form bow waves as they move through water, CMEs set off interplanetary shocks when they erupt from the Sun at extreme speeds, propelling a wave of high-energy particles. These particles can spark space weather events around Earth, endangering spacecraft and astronauts.

Understanding a shock’s structure — particularly how it develops and accelerates — is key to predicting how it might disrupt near-Earth space. But without a vast array of sensors scattered through space, these things are impossible to measure directly. Instead, scientists rely upon models that use satellite observations of the CME to simulate the ensuing shock’s behavior.

The scientists — Ryun-Young Kwon, a solar physicist at George Mason University in Fairfax, Virginia, and Johns Hopkins University Applied Physics Laboratory, or APL, in Laurel, Maryland, and APL astrophysicist Angelos Vourlidas — pulled observations of two different eruptions from three spacecraft: ESA/NASA’s Solar and Heliospheric Observatory, or SOHO, and NASA’s twin Solar Terrestrial Relations Observatory, or STEREO, satellites. One CME erupted in March 2011 and the second, in February 2014.

The scientists fit the CME data to their models — one called the “croissant” model for the shape of nascent shocks, and the other the “ellipsoid” model for the shape of expanding shocks — to uncover the 3-D structure and trajectory of each CME and shock.

Each spacecraft’s observations alone weren’t sufficient to model the shocks. But with three sets of eyes on the eruption, each of them spaced nearly evenly around the Sun, the scientists could use their models to recreate a 3-D view. Their work confirmed long-held theoretical predictions of a strong shock near the CME nose and a weaker shock at the sides.

A team of researchers from several institutions across the U.S. has found evidence suggesting that there was an explosion of diversity in fish after the end-Cretaceous mass extinction. In their paper published in the journal Nature Ecology and Evolution, the team describes their genetic study involving more than 1,800 species of fish and what they found.

After the end-Cretaceous mass extinction—the one that killed off the dinosaurs—mammals became much more diverse and dominant. Without the dinosaurs to feast on them, they were free to prosper. Much less is known about what went on in the oceans. In this new effort, the researchers have added some new pieces to that puzzle.

Prior research has suggested that the asteroid or comet that smashed into the Earth approximately 65 million years ago killed off more than the dinosaurs: approximately 50 percent of all species worldwide disappeared. These included many sharks and marine reptiles, leaving a void of sorts in the world's oceans that allowed fish to flourish. And flourish they did, according to the researchers behind this new effort.

To learn more about what happened with sea creatures after the end-Cretaceous mass extinction, the researchers collected tissue samples from 118 acanthomorph species, looking specifically at 1,000 DNA sequences that were similar across the genomes of their samples—as part of that effort, they searched for variations in genetic sequences that offered clues regarding how closely related the fish were to one another.

The researchers found that six large groups of fish originated over the course of the 10 million years after the mass extinction. Of those groups, five were acanthomorphs (spiny-rayed fish). Today there are approximately 18,000 acanthomorph species, representing roughly one in three vertebrate species alive today. That so many of them originated in the time after the dinosaurs disappeared shows that they, like mammals, found the world a much friendlier place, one in which they were free to prosper. Such an explosion suggests that shark and marine reptile populations, and their diversity, must have plunged, leaving a vast void for the acanthomorphs to fill. Still unclear is why acanthomorphs, rather than other fish species, became so dominant.

Artificial intelligence is sweeping industries like medicine and finance. What if the machine you’re reading this on could learn too?

In a recent paper, Google describes using deep learning—an AI method that employs a large simulated neural network—to improve prefetching. Although the researchers haven’t shown how much this speeds things up, the boost could be big, given what deep learning has brought to other tasks.
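The paper frames prefetching as sequence prediction over memory-address deltas. The sketch below substitutes a simple frequency table for the neural network, purely to illustrate that framing; the access pattern and addresses are made up.

```python
# Sketch of learned prefetching as next-delta prediction. A real system,
# as in Google's paper, would use a neural network; here a table mapping
# each delta to its most frequent successor stands in for the model.
from collections import Counter, defaultdict

def train(deltas):
    """Map each delta to the delta that most often follows it."""
    nxt = defaultdict(Counter)
    for cur, fol in zip(deltas, deltas[1:]):
        nxt[cur][fol] += 1
    return {d: c.most_common(1)[0][0] for d, c in nxt.items()}

# Hypothetical access stream with a repeating stride pattern: +8, +8, +64.
addresses = [0, 8, 16, 80, 88, 96, 160, 168, 176, 240]
deltas = [b - a for a, b in zip(addresses, addresses[1:])]

model = train(deltas)
prefetch = addresses[-1] + model[deltas[-1]]
print(prefetch)  # 248: after a +64 jump the pattern continues with +8
```

Working in deltas rather than raw addresses is what makes the problem learnable: the same stride pattern recurs no matter where in memory the data happens to live.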

“The work that we did is only the tip of the iceberg,” says Heiner Litz of the University of California, Santa Cruz, a visiting researcher on the project. Litz believes it should be possible to apply machine learning to every part of a computer, from the low-level operating system to the software that users interact with.

Such advances would be opportune. Moore’s Law is finally slowing down, and the fundamental design of computer chips hasn’t changed much in recent years. Tim Kraska, an associate professor at MIT who is also exploring how machine learning can make computers work better, says the approach could be useful for high-level algorithms, too. A database might automatically learn how to handle financial data as opposed to social-network data, for instance. Or an application could teach itself to respond to a particular user’s habits more effectively.

“We tend to build general-purpose systems and hardware,” Kraska says. “Machine learning makes it possible that the system is automatically customized, to its core, to the specific data and access patterns of a user.”

Kraska cautions that using machine learning remains computationally expensive, so computer systems won’t change overnight. “However, if it is possible to overcome these limitations,” he says, “the way we develop systems might fundamentally change in the future.”

Litz is more optimistic. “The grand vision is a system that is constantly monitoring itself and learning,” he says. “It’s really the start of something really big.”

Two teams of researchers working independently of one another have come up with an experiment designed to prove that gravity and quantum mechanics can be reconciled. The first team is a pairing of Chiara Marletto of the University of Oxford and Vlatko Vedral of National University of Singapore. The second is an international collaboration. In the papers, both published in Physical Review Letters, the teams describe their experiment and how it might be carried out.

Gravity is a tough nut to crack; there is just no doubt about it. In comparison, the strong, weak and electromagnetic forces are a walk in the park. Scientists still can't explain the nature of gravity, though how it works is rather well understood. The current best theory of gravity goes all the way back to Einstein's general theory of relativity, but there has been no way to reconcile it with quantum mechanics. Some physicists suggest gravity could be carried by a particle called the graviton, but proving that such a particle exists has been frustrating, because its force would be so weak as to be very nearly impossible to measure. In this new effort, neither team is suggesting that their experiment could itself reconcile gravity and quantum mechanics. Instead, they claim that if the experiment were successful, it would strongly suggest that such a reconciliation is possible.

The experiment essentially involves attempting to entangle two particles through their gravitational attraction alone, as a means of confirming that gravity is quantum. In practice, it would consist of levitating two tiny diamonds a small distance from one another and putting each into a superposition of two spin directions. A magnetic field would then be applied to separate the spin components. At this point, a test would be made to see whether the components have become entangled through their gravitational attraction. If they have, the researchers contend, that will show that gravity is quantum; if they have not, it will not.
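In proposals of this kind, the entanglement signature comes from a branch-dependent phase accumulated through the Newtonian interaction between the superposed masses. Schematically (the notation here is a standard sketch, not taken from the article):

```latex
\phi_{ij} \sim \frac{G m^{2} t}{\hbar \, d_{ij}}
```

where m is the mass of each diamond, d_{ij} is the separation between branch i of one superposition and branch j of the other, and t is the interaction time. Entanglement appears only if this phase differs across the four branch combinations, which is why the effect is so small and why the experiment demands such extreme sensitivity.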

The experiment would have to be run many times to get an accurate assessment. And while a first look might suggest it could be conducted very soon, the opposite is true: the researchers estimate it will likely be a decade before the experiment can be carried out, given the improvements in scale and sensitivity that would be required.

Stephen Hawking, whose brilliant mind ranged across time and space though his body was paralyzed by disease, died Wednesday. He was 76. Hawking died at his home in Cambridge, England, according to a statement by the University of Cambridge. The best-known theoretical physicist of his time, Hawking wrote so lucidly of the mysteries of space, time and black holes that his book, "A Brief History of Time," became an international best-seller, making him one of science's biggest celebrities since Albert Einstein.

"He was a great scientist and an extraordinary man whose work and legacy will live on for many years," his children Lucy, Robert and Tim said in a statement. "His courage and persistence with his brilliance and humor inspired people across the world. He once said, 'It would not be much of a universe if it wasn't home to the people you love.' We will miss him forever."

Even though his body was attacked by amyotrophic lateral sclerosis, or ALS, when Hawking was 21, he stunned doctors by living with the normally fatal illness for more than 50 years. A severe attack of pneumonia in 1985 left him breathing through a tube, forcing him to communicate through an electronic voice synthesizer that gave him his distinctive robotic monotone.

But he continued his scientific work, appeared on television and married for a second time. As one of Isaac Newton's successors as Lucasian Professor of Mathematics at Cambridge University, Hawking was involved in the search for the great goal of physics—a "unified theory."

A solar panel that can generate electricity from falling raindrops has been invented, enabling power to flow even when skies cloud over or the sun has set. Solar power installation is soaring globally thanks to costs plunging 90% in the past decade, making it the cheapest electricity in many parts of the world. But the power output can plummet under grey skies and researchers are working to squeeze even more electricity from panels.

The new device, demonstrated in a laboratory at Soochow University in China, places two transparent polymer layers on top of a solar photovoltaic (PV) cell. When raindrops fall on to the layers and then roll off, the friction generates a static electricity charge.

“Our device can always generate electricity in any daytime weather,” said Baoquan Sun, at Soochow University. “In addition, this device even provides electricity at night if there is rain.”

Other researchers have recently created similar devices on solar panels, known as triboelectric nanogenerators (Tengs), but the new design is significantly simpler and more efficient as one of the polymer layers acts as the electrode for both the Teng and the solar cell.

“Due to our unique device design, it becomes a lightweight device,” said Sun, whose team’s work is published in the journal ACS Nano. “In future, we are exploring integrating these into mobile and flexible devices, such as electronic clothes. However, the output power efficiency needs to be further improved before practical application.”
