The supervised deep-learning drug-discovery engine used the properties of small molecules, transcriptional data, and literature to predict efficacy, toxicity, tissue-specificity, and heterogeneity of response.

“We used LINCS data from the Broad Institute to determine the effects on cell lines before and after incubation with compounds,” co-author and research scientist Polina Mamoshina explained to KurzweilAI.

“We used gene expression data of total mRNA from cell lines extracted and measured before incubation with compound X and after incubation with compound X to identify the response on a molecular level. The goal is to understand how gene expression (the transcriptome) will change after drug uptake. It is a differential value, so we need a reference (molecular state before incubation) to compare.”

The research is described in a paper in the upcoming issue of the journal Molecular Pharmaceutics.

Helping pharmas accelerate R&D

Alex Zhavoronkov, PhD, Insilico Medicine CEO, who coordinated the study, said the initial goal of their research was to help pharmaceutical companies significantly accelerate their R&D and increase the number of approved drugs. “In the process we came up with more than 800 strong hypotheses in oncology, cardiovascular, metabolic, and CNS spaces and started basic validation,” he said.

The team measured the “differential signaling pathway activation score for a large number of pathways to reduce the dimensionality of the data while retaining biological relevance.” They then used those scores to train the deep neural networks.*
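As a rough illustration of pathway-level aggregation, per-gene differential expression can be collapsed into a single score per pathway. The gene-to-pathway memberships and fold-change values below are invented, and the study's actual scoring algorithm weights genes by their role in the pathway rather than averaging them equally, so treat this only as a sketch of the dimensionality reduction:

```python
import numpy as np

# Toy sketch of pathway activation scoring: collapse per-gene differential
# expression (post- vs. pre-incubation) into one score per pathway.
# Pathway memberships and log-fold-change values here are invented.
pathways = {"MAPK": ["g1", "g2", "g3"], "p53": ["g2", "g4"]}
log_fold_change = {"g1": 1.2, "g2": -0.4, "g3": 0.8, "g4": 2.0}

scores = {
    name: float(np.mean([log_fold_change[g] for g in genes]))
    for name, genes in pathways.items()
}
print(scores)  # one number per pathway instead of thousands of genes
```

The payoff is that a transcriptome of ~20,000 genes shrinks to a few hundred pathway scores, which is far easier for a network to learn from.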

“This study is a proof of concept that DNNs can be used to annotate drugs using transcriptional response signatures, but we took this concept to the next level,” said Alex Aliper, president of research, Insilico Medicine, Inc., lead author of the study.

Via Pharma.AI, a newly formed subsidiary of Insilico Medicine, “we developed a pipeline for in silico drug discovery — which has the potential to substantially accelerate the preclinical stage for almost any therapeutic — and came up with a broad list of predictions, with multiple in silico validation steps that, if validated in vitro and in vivo, can almost double the number of drugs in clinical practice.”

Despite the commercial orientation of the companies, the authors agreed not to file for intellectual property on these methods and to publish the proof of concept.

Deep-learning age biomarkers

According to Mamoshina, Insilico Medicine scientists earlier this month published the first deep-learned biomarker of human age, aimed at predicting a patient’s health status, in a paper titled “Deep biomarkers of human aging: Application of deep neural networks to biomarker development” by Putin et al. in Aging, along with an overview of recent advances in deep learning, “Applications of Deep Learning in Biomedicine” by Mamoshina et al., also in Molecular Pharmaceutics.

Insilico Medicine is located in the Emerging Technology Centers at Johns Hopkins University in Baltimore, Maryland, in collaboration with Datalytic Solutions and Mind Research Network.

* In this study, scientists used the perturbation samples of 678 drugs across A549, MCF-7 and PC-3 cell lines from the Library of Integrated Network-Based Cellular Signatures (LINCS) project developed by the National Institutes of Health (NIH) and linked those to 12 therapeutic use categories derived from MeSH (Medical Subject Headings) developed and maintained by the National Library of Medicine (NLM) of the NIH.

To train the DNN, scientists utilized both gene level transcriptomic data and transcriptomic data processed using a pathway activation scoring algorithm, for a pooled dataset of samples perturbed with different concentrations of the drug for 6 and 24 hours. Cross-validation experiments showed that DNNs achieve 54.6% accuracy in correctly predicting one out of 12 therapeutic classes for each drug.
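The training setup can be caricatured in a few lines. The sketch below uses synthetic data in place of LINCS profiles and a single softmax layer in place of the paper's deeper networks, but the loop (cross-entropy loss, gradient descent, accuracy over 12 classes) has the same shape:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 600 samples x 50 "pathway activation scores",
# each sample labeled with one of 12 therapeutic-use categories.
n_samples, n_pathways, n_classes = 600, 50, 12
centers = rng.normal(size=(n_classes, n_pathways))
y = rng.integers(0, n_classes, size=n_samples)
X = centers[y] + rng.normal(scale=0.5, size=(n_samples, n_pathways))

# A single softmax layer trained by gradient descent; the paper's DNNs are
# deeper, but the cross-entropy training loop looks the same.
W = np.zeros((n_pathways, n_classes))
b = np.zeros(n_classes)
for _ in range(300):
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(n_samples), y] -= 1.0                  # dLoss/dlogits
    W -= 0.1 * X.T @ p / n_samples
    b -= 0.1 * p.mean(axis=0)

acc = (np.argmax(X @ W + b, axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

On this easily separable toy data the model lands far above the 1/12 chance level; the paper's 54.6% cross-validation accuracy on real, noisy transcriptomes is a much harder target.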

One peculiar finding of this experiment was that a large number of drugs misclassified by the DNNs had dual use, suggesting possible application of DNN confusion matrices in drug repurposing.

Abstract of Deep Learning Applications for Predicting Pharmacological Properties of Drugs and Drug Repurposing Using Transcriptomic Data

Deep learning is rapidly advancing many areas of science and technology with multiple success stories in image, text, voice and video recognition, robotics and autonomous driving. In this paper we demonstrate how deep neural networks (DNN) trained on large transcriptional response data sets can classify various drugs to therapeutic categories solely based on their transcriptional profiles. We used the perturbation samples of 678 drugs across A549, MCF-7 and PC-3 cell lines from the LINCS project and linked those to 12 therapeutic use categories derived from MeSH. To train the DNN, we utilized both gene level transcriptomic data and transcriptomic data processed using a pathway activation scoring algorithm, for a pooled dataset of samples perturbed with different concentrations of the drug for 6 and 24 hours. When applied to normalized gene expression data for “landmark genes,” DNN showed cross-validation mean F1 scores of 0.397, 0.285 and 0.234 on 3-, 5- and 12-category classification problems, respectively. At the pathway level DNN performed best with cross-validation mean F1 scores of 0.701, 0.596 and 0.546 on the same tasks. In both gene and pathway level classification, DNN convincingly outperformed support vector machine (SVM) model on every multiclass classification problem. For the first time we demonstrate a deep learning neural net trained on transcriptomic data to recognize pharmacological properties of multiple drugs across different biological systems and conditions. We also propose using deep neural net confusion matrices for drug repositioning. This work is a proof of principle for applying deep learning to drug discovery and development.

The boldfaced line, known as a spanning tree, follows the desired geometric shape of the target DNA origami design, touching each vertex just once. A spanning tree algorithm is used to map out the proper routing path for the DNA strand. (credit: Public Domain)

MIT, Baylor College of Medicine, and Arizona State University Biodesign Institute researchers have developed a radical new top-down DNA origami* design method, based on a computer algorithm that generates designs for DNA nanostructures from nothing more than an input target shape.

DNA origami (using DNA to design and build geometric structures) has already proven wildly successful in creating myriad forms in two and three dimensions, which conveniently self-assemble when the designed DNA sequences are mixed together. The tricky part is preparing the proper DNA sequence and routing design for the scaffold and staple strands to achieve the desired target structure. Typically, this is painstaking work that must be carried out manually.

The new algorithm, which is reported together with a novel synthesis approach in the journal Science, promises to eliminate all that manual work and to expand the range of possible applications of DNA origami in biomolecular science and nanotechnology. Think nanoparticles for drug delivery and cell targeting, nanoscale robots in medicine and industry, custom-tailored optical devices, and most interesting: DNA as a storage medium, offering retention times in the millions of years.**

Shape-shifting, top-down software

Unlike traditional DNA origami, in which the structure is built up by hand, the team’s radical top-down autonomous design method begins with an outline of the desired form and works backward in stages to define the required DNA sequence that will properly fold to form the finished product.

“The Science paper turns the problem around from one in which an expert designs the DNA needed to synthesize the object, to one in which the object itself is the starting point, with the DNA sequences that are needed automatically defined by the algorithm,” said Mark Bathe, an associate professor of biological engineering at MIT, who led the research. “Our hope is that this automation significantly broadens participation of others in the use of this powerful molecular design paradigm.”

The algorithm, which is known as DAEDALUS (DNA Origami Sequence Design Algorithm for User-defined Structures) after the Greek craftsman and artist who designed labyrinths that resemble origami’s complex scaffold structures, can build any type of 3-D shape, provided it has a closed surface. This can include shapes with one or more holes, such as a torus.

A simplified version of the top-down procedure used to design scaffolded DNA origami nanostructures. It starts with a polygon corresponding to the target shape. Software translates a wireframe version of this structure into a plan for routing DNA scaffold and staple strands. That enables a 3D DNA-based atomic-level structural model that is then validated using 3D cryo-EM reconstruction. (credit: adapted from Biodesign Institute images)

With the new technique, the target geometric structure is first described in terms of a wire mesh made up of polyhedra, with a network of nodes and edges. A DNA scaffold using strands of custom length and sequence is generated, using a “spanning tree” algorithm — basically a map that will automatically guide the routing of the DNA scaffold strand through the entire origami structure, touching each vertex in the geometric form once. Complementary staple strands are then assigned and the final DNA structural model or nanoparticle self-assembles, and is then validated using 3D cryo-EM reconstruction.
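The spanning-tree step is standard graph traversal. Here is a minimal sketch on a toy wireframe (a tetrahedron); the published DAEDALUS implementation operates on the actual polyhedral mesh of the target shape and then converts the tree into a scaffold routing, so this only illustrates the "touch each vertex once" property:

```python
from collections import deque

# Toy wireframe: a tetrahedron as a vertex -> neighbors adjacency list.
mesh = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}

def spanning_tree(mesh, root=0):
    """Breadth-first spanning tree: reaches every vertex exactly once."""
    tree, seen, queue = [], {root}, deque([root])
    while queue:
        u = queue.popleft()
        for v in mesh[u]:
            if v not in seen:
                seen.add(v)
                tree.append((u, v))
                queue.append(v)
    return tree

edges = spanning_tree(mesh)
print(edges)  # a graph with V vertices yields a tree of V - 1 edges
```

For any connected wireframe, the tree has exactly one fewer edge than the mesh has vertices, which is what lets a single scaffold strand visit every vertex without doubling back.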

The software allows for fabricating a variety of geometric DNA objects, including 35 polyhedral forms (Platonic, Archimedean, Johnson and Catalan solids), six asymmetric structures, and four polyhedra with nonspherical topology, using inverse design principles — no manual base-pair designs needed.

To test the method, simpler forms known as Platonic solids were first fabricated, followed by increasingly complex structures. These included objects with nonspherical topologies and unusual internal details, which had never been experimentally realized before. Further experiments confirmed that the DNA structures produced were potentially suitable for biological applications since they displayed long-term stability in serum and low-salt conditions.

Biological research uses

The research also paves the way for designing nanoscale systems mimicking the properties of viruses, photosynthetic organisms, and other sophisticated products of natural evolution. One such application is a scaffold for viral peptides and proteins for use as vaccines. The surface of the nanoparticles could be designed with any combination of peptides and proteins, located at any desired location on the structure, in order to mimic the way in which a virus appears to the body’s immune system.

The researchers demonstrated that the DNA nanoparticles are stable for more than six hours in serum, and are now attempting to increase their stability further.

The nanoparticles could also be used to encapsulate the CRISPR-Cas9 gene editing tool. The CRISPR-Cas9 tool has enormous potential in therapeutics, thanks to its ability to edit targeted genes. However, there is a significant need to develop techniques to package the tool and deliver it to specific cells within the body, Bathe says.

This is currently done using viruses, but these are limited in the size of package they can carry, restricting their use. The DNA nanoparticles, in contrast, are capable of carrying much larger gene packages and can easily be equipped with molecules that help target the right cells or tissue.

The most exciting aspect of the work, however, is that it should significantly broaden participation in the application of this technology, Bathe says, much like 3-D printing has done for complex 3-D geometric models at the macroscopic scale.

* DNA origami brings the ancient Japanese method of paper folding down to the molecular scale. The basics are simple: take a length of single-stranded DNA and guide it into a desired shape, fastening the structure together using shorter “staple strands,” which bind in strategic places along the longer length of DNA. The method relies on the fact that DNA’s four nucleotide letters (A, T, C, and G) stick together in a consistent manner: As always pair with Ts, and Cs with Gs.

The DNA molecule in its characteristic double-stranded form is fairly stiff, compared with single-stranded DNA, which is flexible. For this reason, single-stranded DNA makes for an ideal lace-like scaffold material. Further, its pairing properties are predictable and consistent (unlike RNA).

** A single gram of DNA can store about 700 terabytes of information — an amount equivalent to 14,000 50-gigabyte Blu-ray disks — and could potentially be operated with a fraction of the energy required for other information storage options.
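The Blu-ray comparison in the footnote checks out arithmetically:

```python
# Sanity check of the footnote's figure: 700 TB per gram of DNA,
# expressed in 50-GB Blu-ray disks (using 1 TB = 1000 GB).
terabytes_per_gram = 700
blu_ray_gb = 50
disks = terabytes_per_gram * 1000 / blu_ray_gb
print(disks)  # 14000.0
```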

Biodesign Institute at ASU | DNA Origami

Abstract of Designer nanoscale DNA assemblies programmed from the top down

Scaffolded DNA origami is a versatile means of synthesizing complex molecular architectures. However, the approach is limited by the need to forward-design specific Watson-Crick base-pairing manually for any given target structure. Here, we report a general, top-down strategy to design nearly arbitrary DNA architectures autonomously based only on target shape. Objects are represented as closed surfaces rendered as polyhedral networks of parallel DNA duplexes, which enables complete DNA scaffold routing with a spanning tree algorithm. The asymmetric polymerase chain reaction was applied to produce stable, monodisperse assemblies with custom scaffold length and sequence that are verified structurally in 3D to be high fidelity using single-particle cryo-electron microscopy. Their long-term stability in serum and low-salt buffer confirms their utility for biological as well as nonbiological applications.

Researchers at the Walter and Eliza Hall Institute in Australia have discovered a new way to trigger cell death that could lead to drugs to treat cancer and autoimmune disease.

Programmed cell death (a.k.a. apoptosis) is a natural process that removes unwanted cells from the body. Failure of apoptosis can allow cancer cells to grow unchecked or immune cells to inappropriately attack the body.

The protein known as Bak is central to apoptosis. In healthy cells, Bak sits in an inert state but when a cell receives a signal to die, Bak transforms into a killer protein that destroys the cell.

Triggering the cancer-apoptosis trigger

Institute researchers Sweta Iyer, PhD, Ruth Kluck, PhD, and colleagues unexpectedly discovered that an antibody they had produced to study Bak actually bound to the Bak protein and triggered its activation. They hope to use this discovery to develop drugs that promote cell death.

The researchers used information about Bak’s three-dimensional structure to find out precisely how the antibody activated Bak. “It is well known that Bak can be activated by a class of proteins called ‘BH3-only proteins’ that bind to a groove on Bak. We were surprised to find that despite our antibody binding to a completely different site on Bak, it could still trigger activation,” Kluck said. “The advantage of our antibody is that it can’t be ‘mopped up’ and neutralized by pro-survival proteins in the cell, potentially reducing the chance of drug resistance occurring.”

Drugs that target this new activation site could be useful in combination with other therapies that promote cell death by mimicking the BH3-only proteins. The researchers are now working with collaborators to develop their antibody into a drug that can access Bak inside cells.

Their findings have just been published in the open-access journal Nature Communications. The research was supported by the National Health and Medical Research Council, the Australian Research Council, the Victorian State Government Operational Infrastructure Support Scheme, and the Victorian Life Science Computation Initiative.

Abstract of Identification of an activation site in Bak and mitochondrial Bax triggered by antibodies

During apoptosis, Bak and Bax are activated by BH3-only proteins binding to the α2–α5 hydrophobic groove; Bax is also activated via a rear pocket. Here we report that antibodies can directly activate Bak and mitochondrial Bax by binding to the α1–α2 loop. A monoclonal antibody (clone 7D10) binds close to α1 in non-activated Bak to induce conformational change, oligomerization, and cytochrome c release. Anti-FLAG antibodies also activate Bak containing a FLAG epitope close to α1. An antibody (clone 3C10) to the Bax α1–α2 loop activates mitochondrial Bax, but blocks translocation of cytosolic Bax. Tethers within Bak show that 7D10 binding directly extricates α1; a structural model of the 7D10 Fab bound to Bak reveals the formation of a cavity under α1. Our identification of the α1–α2 loop as an activation site in Bak paves the way to develop intrabodies or small molecules that directly and selectively regulate these proteins.

Researchers have developed a new method for doping (integrating elements to change a semiconductor’s properties) single crystals of diamond with boron at relatively low temperatures, without degradation.

Diamonds have properties that could make them ideal semiconductors for power electronics. They can handle high voltages and power, and electrical currents also flow through diamonds quickly, meaning the material would make for energy-efficient devices. And they are thermally conductive, which means diamond-based devices would dissipate heat quickly and easily (no need for bulky, expensive cooling methods). However, diamond’s rigid crystalline structure makes doping difficult.*

The researchers discovered that if you bond a single-crystal diamond to a piece of silicon doped with boron and heat it to 800 degrees Celsius (low compared to conventional techniques), the boron atoms will migrate from the silicon to the diamond. It turns out that the boron-doped silicon has defects such as vacancies, where an atom is missing in the lattice structure. Carbon atoms from the diamond fill those vacancies, leaving empty spots for the boron atoms.

This technique also allows for selective doping, which means more control when making devices. You can choose where to dope a single-crystal diamond simply by bonding the silicon to that spot.

The new method currently only works for P-type doping, where the semiconductor is doped with an element that provides positive charge carriers (in this case, the absence of electrons, called holes). The researchers are already working on a simple device using P-type single-crystal diamond semiconductors.

But to make electronic devices like transistors, you need N-type doping, which gives the semiconductor negative charge carriers (electrons). And other barriers remain: diamond is expensive and single crystals are very small.

Still, Ma says, achieving P-type doping is an important step, and might inspire others to find solutions for the remaining challenges. Eventually, he said, single-crystal diamond could be useful everywhere — perfect, for instance, for controlling power in the electrical grid.

* Currently, you can dope diamond by coating the crystal with boron and heating it to 1450 degrees Celsius. But it’s difficult to remove the boron coating at the end. This method only works on diamonds consisting of multiple crystals stuck together. Because such polydiamonds have irregularities between the crystals, single crystals would be superior semiconductors. You can dope single crystals by injecting boron atoms while growing the crystals artificially. The problem is the process requires powerful microwaves that can degrade the quality of the crystal.

With the best overall electronic and thermal properties, single crystal diamond (SCD) is the extreme wide bandgap material that is expected to revolutionize power electronics and radio-frequency electronics in the future. However, turning SCD into useful semiconductors requires overcoming doping challenges, as conventional substitutional doping techniques, such as thermal diffusion and ion implantation, are not easily applicable to SCD. Here we report a simple and easily accessible doping strategy demonstrating that electrically activated, substitutional doping in SCD without inducing graphitization transition or lattice damage can be readily realized with thermal diffusion at relatively low temperatures by using heavily doped Si nanomembranes as a unique dopant carrying medium. Atomistic simulations elucidate a vacancy-exchange boron doping mechanism that occurs at the bonded interface between Si and diamond. We further demonstrate selectively doped high voltage diodes and half-wave rectifier circuits using such doped SCD. Our new doping strategy has established a reachable path toward using SCDs for future high voltage power conversion systems and for other novel diamond based electronic devices. The novel doping mechanism may find its critical use in other wide bandgap semiconductors.

The LIGR-seq method for global-scale mapping of RNA-RNA interactions in vivo to reveal unexpected functions for uncharacterized RNAs that act via base-pairing interactions (credit: University of Toronto)

What used to be dismissed by many as “junk DNA” has now become vitally important, as accelerating genomic data points to the importance of non-coding RNAs (ncRNAs) — a genome’s messages that do not specifically code for proteins — in development and disease.

But our progress in understanding these molecules has been slow because of the lack of technologies that allow for systematic mapping of their functions.

Now, professor Benjamin Blencowe’s team at the University of Toronto’s Donnelly Centre has developed a method called “LIGR-seq” that enables scientists to explore in depth what ncRNAs do in human cells.

The study, described in Molecular Cell, was published on May 19 along with two related papers, in Molecular Cell and Cell respectively, from Yue Wan’s group at the Genome Institute of Singapore and Howard Chang’s group at Stanford University in California, which developed similar methods to study RNAs in different organisms.

So what exactly do ncRNAs do?

mRNAs vs. ncRNAs (credit: Thomas Shafee/CC)

Of the 3 billion letters in the human genome, only two per cent make up the protein-coding genes. The genes are copied, or transcribed, into messenger RNA (mRNA) molecules, which provide templates for building proteins that do most of the work in the cell. Much of the remaining 98 per cent of the genome was initially considered by some as lacking in functional importance. However, large swaths of the non-coding genome — between half and three quarters of it — are also copied into RNA.

So then what might the resulting ncRNAs do? That depends on whom you ask. Some researchers believe that most ncRNAs have no function, that they are just a by-product of the genome’s powerful transcription machinery that makes mRNA. However, it is emerging that many ncRNAs do have important roles in gene regulation — some ncRNAs act as carriages for shuttling the mRNAs around the cell, or provide a scaffold for other proteins and RNAs to attach to and do their jobs.

But the majority of available data has trickled in piecemeal or through serendipitous discovery. And with emerging evidence that ncRNAs could drive disease progression, such as cancer metastasis, there was a great need for a technology that would allow a systematic functional analysis of ncRNAs.

“Up until now, with existing methods, you had to know what you are looking for because they all require you to have some information about the RNA of interest. The power of our method is that you don’t need to preselect your candidates; you can see what’s occurring globally in cells, and use that information to look at interesting things we have not seen before and how they are affecting biology,” says Eesha Sharma, a PhD candidate in Blencowe’s group who, along with postdoctoral fellow Tim Sterne-Weiler, co-developed the method.

A new ncRNA identification tool

The human RNA-RNA interactome, showing interactions detected by LIGR-seq (credit: University of Toronto)

The new “LIGation of interacting RNA and high-throughput sequencing” (LIGR-seq) tool captures interactions between different RNA molecules. When two RNA molecules have complementary sequences — strings of letters copied from the DNA blueprint — they will stick together like Velcro. With LIGR-seq, the paired RNA structures are removed from cells and analyzed by state-of-the-art sequencing methods to precisely identify the RNAs that are stuck together.
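The "Velcro" rule is just antiparallel base pairing. A minimal sketch with made-up sequences (LIGR-seq identifies duplexes experimentally, by crosslinking, ligating, and sequencing them, not by string matching):

```python
# RNA base-pairing rule: A pairs with U, G pairs with C.
PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    """The strand that would pair with `rna`, read in the usual direction."""
    return "".join(PAIR[b] for b in reversed(rna))

def can_pair(a: str, b: str) -> bool:
    """True if strand b is fully complementary (antiparallel) to strand a."""
    return len(a) == len(b) and reverse_complement(a) == b

print(can_pair("AUGGC", "GCCAU"))  # True: these strands form a duplex
print(can_pair("AUGGC", "AUGGC"))  # False: identical strands do not pair
```

Real duplexes tolerate mismatches and bulges, so the all-or-nothing check here is a deliberate simplification.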

“Most researchers in the life sciences agree that there’s an urgent need to understand what ncRNAs do. This technology will open the door to developing a new understanding of ncRNA function,” says Blencowe, who is also a professor in the Department of Molecular Genetics.

Not having to rely on pre-existing knowledge will boost the discovery of RNA pairs that have never been seen before. Scientists can also, for the first time, look at RNA interactions as they occur in living cells, in all their complexity, unlike in the juices of mashed-up cells that they had to rely on before. This is a bit like moving from collecting shells on the beach to scuba-diving among the coral reefs, where the scope for discovery is so much bigger.

Actually, ncRNAs come in multiple flavors: there’s rRNA, tRNA, snRNA, snoRNA, piRNA, miRNA, and lncRNA, to name a few, where prefixes reflect the RNA’s place in the cell or some aspect of its function. But the truth is that no one really knows the extent to which these ncRNAs control what goes on in the cell, or how they do this.

Discoveries

Nonetheless, the new technology developed by Blencowe’s group has been able to pick up new interactions involving all classes of RNAs and has already revealed some unexpected findings.

The team discovered new roles for small nucleolar RNAs (snoRNAs), which normally guide chemical modifications of other ncRNAs. It turns out that some snoRNAs can also regulate stability of a set of protein-coding mRNAs. In this way, snoRNAs can also directly influence which proteins are made, as well as their abundance, adding a new level of control in cell biology.

And this is only the tip of the iceberg; the researchers plan to further develop and apply their technology to investigate the ncRNAs in different settings.

“We would like to understand how ncRNAs function during development. We are particularly interested in their role in the formation of neurons. But we will also use our method to discover and map changes in RNA-RNA interactions in the context of human diseases,” says Blencowe.

Abstract of Global Mapping of Human RNA-RNA Interactions

The majority of the human genome is transcribed into non-coding (nc)RNAs that lack known biological functions or else are only partially characterized. Numerous characterized ncRNAs function via base pairing with target RNA sequences to direct their biological activities, which include critical roles in RNA processing, modification, turnover, and translation. To define roles for ncRNAs, we have developed a method enabling the global-scale mapping of RNA-RNA duplexes crosslinked in vivo, “LIGation of interacting RNA followed by high-throughput sequencing” (LIGR-seq). Applying this method in human cells reveals a remarkable landscape of RNA-RNA interactions involving all major classes of ncRNA and mRNA. LIGR-seq data reveal unexpected interactions between small nucleolar (sno)RNAs and mRNAs, including those involving the orphan C/D box snoRNA, SNORD83B, that control steady-state levels of its target mRNAs. LIGR-seq thus represents a powerful approach for illuminating the functions of the myriad of uncharacterized RNAs that act via base-pairing interactions.

Data-sharing vision as facilitated by GA4GH through its working groups (credit: GA4GH)

Sharing genetic information from millions of cancer patients around the world could revolutionize cancer prevention and care, according to a paper in Nature Medicine by the Cancer Task Team of the Global Alliance for Genomics and Health (GA4GH).

Hospitals, laboratories and research facilities around the world hold huge amounts of this data from cancer patients, but it’s currently held in isolated “silos” that don’t talk to each other, according to GA4GH, a partnership between scientists, clinicians, patients, and the IT and Life Sciences industry, involving more than 400 organizations in over 40 countries. GA4GH intends to provide a common framework for the responsible, voluntary and secure sharing of patients’ clinical and genomic data.

A searchable global cancer database

“Imagine if we could create a searchable cancer database that allowed doctors to match patients from different parts of the world with suitable clinical trials,” said GA4GH co-chair professor Mark Lawler, a leading cancer expert from Queen’s University Belfast. “This genetic matchmaking approach would allow us to develop personalized treatments for each individual’s cancer, precisely targeting rogue cells and improving outcomes for patients.

“This data sharing presents logistical, technical, and ethical challenges. Our paper highlights these challenges and proposes potential solutions to allow the sharing of data in a timely, responsible and effective manner. We hope this blueprint will be adopted by researchers around the world and enable a unified global approach to unlocking the value of data for enhanced patient care.”

GA4GH acknowledges that there are security issues, and has created a Security Working Group and a policy paper that documents the standards and implementation practices for protecting the privacy and security of shared genomic and clinical data.

Examples of current initiatives for clinico-genomic data-sharing include the U.S.-based Precision Medicine Initiative and the UK’s 100,000 Genomes Project, both of which have cancer as a major focus.

Professor Lawler is funded by the Medical Research Council and Cancer Research UK.

Abstract of Facilitating a culture of responsible and effective sharing of cancer genome data

Rapid and affordable tumor molecular profiling has led to an explosion of clinical and genomic data poised to enhance the diagnosis, prognostication and treatment of cancer. A critical point has now been reached at which the analysis and storage of annotated clinical and genomic information in unconnected silos will stall the advancement of precision cancer care. Information systems must be harmonized to overcome the multiple technical and logistical barriers to data sharing. Against this backdrop, the Global Alliance for Genomic Health (GA4GH) was established in 2013 to create a common framework that enables responsible, voluntary and secure sharing of clinical and genomic data. This Perspective from the GA4GH Clinical Working Group Cancer Task Team highlights the data-aggregation challenges faced by the field, suggests potential collaborative solutions and describes how GA4GH can catalyze a harmonized data-sharing culture.

An inexpensive portable biosensor developed by researchers at Brazil’s National Nanotechnology Laboratory (credit: LNNano)

A novel nanoscale organic transistor-based biosensor that can detect molecules associated with neurodegenerative diseases and some types of cancer has been developed by researchers at the National Nanotechnology Laboratory (LNNano) in Brazil.

The transistor, mounted on a glass slide, contains the reduced form of the peptide glutathione (GSH), which reacts in a specific way when it comes into contact with the enzyme glutathione S-transferase (GST), linked to Parkinson’s, Alzheimer’s and breast cancer, among other diseases.

“The device can detect such molecules even when they’re present at very low levels in the examined material, thanks to its nanometric sensitivity,” explained Carlos Cesar Bof Bufon, Head of LNNano’s Functional Devices & Systems Lab (DSF).

Bufon said the system can be adapted to detect other substances by replacing the analytes (detection compounds). The team is working on paper-based biosensors to further lower the cost, improve portability, and facilitate fabrication and disposal.

The research is published in the journal Organic Electronics.

Abstract of Water-gated phthalocyanine transistors: Operation and transduction of the peptide–enzyme interaction

The use of aqueous solutions as the gate medium is an attractive strategy to obtain high charge carrier density (10¹² cm⁻²) and low operational voltages (<1 V) in organic transistors. Additionally, it provides a simple and favorable architecture to couple both ionic and electronic domains in a single device, which is crucial for the development of novel technologies in bioelectronics. Here, we demonstrate the operation of transistors containing copper phthalocyanine (CuPc) thin-films gated with water and discuss the charge dynamics at the CuPc/water interface. Without the need for complex multilayer patterning, or the use of surface treatments, water-gated CuPc transistors exhibited low threshold (100 ± 20 mV) and working voltages (<1 V) compared to conventional CuPc transistors, along with similar charge carrier mobilities of (1.2 ± 0.2) × 10⁻³ cm² V⁻¹ s⁻¹. Several device characteristics such as moderate switching speeds and hysteresis, associated with high capacitances at low frequencies upon bias application (3.4–12 μF cm⁻²), indicate the occurrence of interfacial ion doping. Finally, water-gated CuPc OTFTs were employed in the transduction of the biospecific interaction between the tripeptide reduced glutathione (GSH) and the glutathione S-transferase (GST) enzyme, taking advantage of the device sensitivity and multiparametricity.

An atherosclerotic lesion. Such lesions can rupture and cause heart attacks and strokes. (credit: UVA School of Medicine)

Researchers at the University of Virginia School of Medicine have discovered that a gene called Oct4 — which scientific dogma insists is inactive in adults — actually plays a vital role in preventing ruptured atherosclerotic plaques inside blood vessels, the underlying cause of most heart attacks and strokes.

The researchers found that Oct4 controls the conversion of smooth muscle cells into protective fibrous “caps” inside plaques, making the plaques less likely to rupture. They also discovered that the gene promotes many changes in gene expression that are beneficial in stabilizing the plaques. In addition, the researchers believe it may be possible to develop drugs or other therapeutic agents that target the Oct4 pathway as a way to reduce the incidence of heart attacks or stroke.

Could impact many human diseases, regenerative medicine

The researchers are also currently testing Oct4's possible role in repairing cellular damage and healing wounds, which would make it useful for regenerative medicine.

Oct4 is one of the “stem cell pluripotency factors” described by Shinya Yamanaka, PhD, of Kyoto University, for which he received the 2012 Nobel Prize. His lab and many others have shown that artificial over-expression of Oct4 within somatic cells grown in a lab dish is essential for reprogramming these cells into induced pluripotent stem cells, which can then develop into any cell type in the body or even an entire organism.

“Finding a way to reactivate this pathway may have profound implications for health and aging,” said researcher Gary K. Owens, director of UVA’s Robert M. Berne Cardiovascular Research Center. “This could impact many human diseases and the field of regenerative medicine. [It may also] end up being the ‘fountain-of-youth gene,’ a way to revitalize old and worn-out cells.”

The discovery is described in a paper published online in Nature Medicine. The work was funded by the National Institutes of Health, the Russian Science Foundation, the Russian Federal Agency of Scientific Organization, and the U.S. Department of Defense.

Abstract of Activation of the pluripotency factor OCT4 in smooth muscle cells is atheroprotective

Although somatic cell activation of the embryonic stem cell (ESC) pluripotency factor OCT4 has been reported, this previous work has been controversial and has not demonstrated a functional role for OCT4 in somatic cells. Here we demonstrate that smooth muscle cell (SMC)-specific conditional knockout of Oct4 in Apoe−/− mice resulted in increased lesion size and changes in lesion composition that are consistent with decreased plaque stability, including a thinner fibrous cap, increased necrotic core area, and increased intraplaque hemorrhage. Results of SMC-lineage-tracing studies showed that these effects were probably the result of marked reductions in SMC numbers within lesions and SMC investment within the fibrous cap, which may result from impaired SMC migration. The reactivation of Oct4 within SMCs was associated with hydroxymethylation of the Oct4 promoter and was hypoxia inducible factor-1α (HIF-1α, encoded by HIF1A) and Krüppel-like factor-4 (KLF4)-dependent. These results provide the first direct evidence that OCT4 has a functional role in somatic cells, and they highlight the potential role of OCT4 in normal and diseased somatic cells.

Cubimorph is an interactive device made of a chain of reconfigurable modules that shape-shifts into any shape that can be made out of a chain of cubes, such as transforming from a mobile phone to a game console. (credit: Anne Roudaut et al./Proceedings of the ICRA 2016)

British researchers and Google have independently developed revolutionary concepts for Lego-like modular interactive mobile devices.

The British team’s design, called Cubimorph, is constructed of a chain of cubes. It has touchscreens on each of the six module faces and uses a hinge-mounted turntable mechanism to self-reconfigure in the user’s hand. One example: a mobile phone that can transform into a console when a user launches a game.

Ara, launched at Google’s I/O developer conference, uses a frame that contains all the functionality of a smartphone (CPU, GPU, antennas, sensors, battery, and display) plus six flexible slots for easy swapping of modules. “Slide any Ara module into any slot and it just works,” is the concept. Powering this is Greybus, a new bit of software deep in the Android stack that supports instantaneous connections, power efficiency, and data-transfer rates of up to 11.9 Gbps. The Developer Edition will ship in Fall 2016, with a consumer version in 2017.

Google | Ara: What’s next

Abstract of Cubimorph: Designing Modular Interactive Devices

We introduce Cubimorph, a modular interactive device that accommodates touchscreens on each of the six module faces, and that uses a hinge-mounted turntable mechanism to self-reconfigure in the user’s hand. Cubimorph contributes toward the vision of programmable matter where interactive devices reconfigure in any shape that can be made out of a chain of cubes in order to fit a myriad of functionalities, e.g. a mobile phone shifting into a console when a user launches a game. We present a design rationale that exposes user requirements to consider when designing homogeneous modular interactive devices. We present our Cubimorph mechanical design, three prototypes demonstrating key aspects (turntable hinges, embedded touchscreens and miniaturization), and an adaptation of the probabilistic roadmap algorithm for the reconfiguration.

New software developed by Carnegie Mellon University helps mobile robots deal efficiently with clutter, whether it is in the back of a refrigerator or on the surface of the moon. (credit: Carnegie Mellon University Personal Robotics Lab)

Robots are adept at picking up an object in a specified place (such as in a factory assembly line) and putting it down at another specified place (known as “pick-and-place,” or P&P, processes). But homes and other planets, for example, are a special challenge for robots.

When a person reaches for a milk carton in a refrigerator, he doesn’t necessarily move every other item out of the way. Rather, a person might move an item or two, while shoving others out of the way as the carton is pulled out.

Robot creativity

Robot employs a “push and shove” method (credit: Jennifer E. King et al./Proceedings of IEEE International Conference on Robotics and Automation)

In tests, the new “push and shove” algorithm helped a robot deal efficiently with clutter, but surprisingly, it also revealed the robot’s creativity in solving problems.

“It was exploiting sort of superhuman capabilities,” Siddhartha Srinivasa, associate professor of robotics, said of his lab’s two-armed mobile robot, the Home Exploring Robot Butler, or HERB. “The robot’s wrist has a 270-degree range, which led to behaviors we didn’t expect. Sometimes, we’re blinded by our own anthropomorphism.”

In one case, the robot used the crook of its arm to cradle an object to be moved. “We never taught it that,” Srinivasa said.

K-Rex rover prototype (credit: NASA)

The new algorithm was also tested on NASA’s K-Rex robot, which is being designed to traverse the lunar surface. While HERB focused on clutter typical of a home, K-Rex used the software to find traversable paths across an obstacle-filled landscape while pushing an object.

A “rearrangement planner” automatically finds a balance between the two strategies (pick-and-place vs. push-and-shove), Srinivasa said, based on the robot’s progress on its task. The robot is programmed to understand the basic physics of its world, so it has some idea of what can be pushed, lifted, or stepped on. And it can be taught to pay attention to items that might be valuable or delicate.
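The balance the planner strikes can be pictured with a toy action selector (an illustrative stand-in, not CMU's planner; the linear blend and the `progress_rate` signal are invented for this sketch):

```python
import random

def choose_action(progress_rate, rng):
    """Favor precise, object-centric pick-and-place while the task is
    progressing well; fall back to robot-centric push-and-shove motions
    (whole-arm contact, no specific object intent) when progress stalls.
    The linear blend below is an invented heuristic."""
    p_pick_and_place = 0.3 + 0.6 * progress_rate
    return "pick-and-place" if rng.random() < p_pick_and_place else "push-and-shove"

rng = random.Random(0)
counts = {"pick-and-place": 0, "push-and-shove": 0}
for _ in range(1000):
    counts[choose_action(progress_rate=0.1, rng=rng)] += 1
# With low progress (0.1), push-and-shove dominates the sampled actions.
```

A real planner would, of course, score candidate actions by simulating the physics of contact rather than sampling blindly; the point here is only the graded hand-off between the two action families.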

The researchers presented their work last week (May 19) at the IEEE International Conference on Robotics and Automation in Stockholm, Sweden. NASA, the National Science Foundation, Toyota Motor Engineering and Manufacturing, and the Office of Naval Research supported this research.

Abstract of Rearrangement Planning Using Object-Centric and Robot-Centric Action Spaces

This paper addresses the problem of rearrangement planning, i.e. to find a feasible trajectory for a robot that must interact with multiple objects in order to achieve a goal. We propose a planner to solve the rearrangement planning problem by considering two different types of actions: robot-centric and object-centric. Object-centric actions guide the planner to perform specific actions on specific objects. Robot-centric actions move the robot without object relevant intent, easily allowing simultaneous object contact and whole arm interaction. We formulate a hybrid planner that uses both action types. We evaluate the planner on tasks for a mobile robot and a household manipulator.

Researchers at Washington State University are using ideas from animal training to help non-expert users teach robots how to do desired tasks.

As robots become more pervasive in society, humans will want them to do chores like cleaning house or cooking. But to get a robot started on a task, people who aren’t computer programmers will have to give it instructions. “So we needed to provide a way for everyone to train robots, without programming,” said Matthew Taylor, Allred Distinguished Professor in the WSU School of Electrical Engineering and Computer Science.

User feedback improves robot performance

With Bei Peng, a doctoral student in computer science, and collaborators at Brown University and North Carolina State University, Taylor designed a computer program that lets humans without programming expertise teach a virtual robot that resembles a dog in WSU’s Intelligent Robot Learning Laboratory.

For the study, the researchers varied the speed at which their virtual dog reacted. As when somebody is teaching a new skill to a real animal, the slower movements let the trainer know that the virtual dog was unsure of how to behave, so trainers could provide clearer guidance to help the robot learn better.
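The speed-modulation idea can be sketched roughly as follows (an illustrative guess at the mechanism, not the study's code; the uncertainty measure and the speed range are invented):

```python
def policy_uncertainty(action_values):
    """Normalized gap between the best and second-best action value:
    0 = very certain, 1 = maximally uncertain (assumed measure)."""
    ranked = sorted(action_values, reverse=True)
    gap = ranked[0] - ranked[1]
    spread = (max(action_values) - min(action_values)) or 1.0
    return 1.0 - gap / spread

def execution_speed(action_values, fast=1.0, slow=0.25):
    """Interpolate between fast and slow execution by certainty, so the
    virtual dog visibly hesitates where its policy is unsure."""
    return slow + (fast - slow) * (1.0 - policy_uncertainty(action_values))

confident = execution_speed([0.9, 0.1, 0.0])  # one action dominates: near fast
unsure = execution_speed([0.5, 0.5, 0.1])     # top actions tie: the slow speed
```

The slow, hesitant movements in uncertain states are exactly what cues the trainer to give clearer guidance.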

The researchers have begun working with physical robots as well as virtual ones. They also hope eventually to use the program to help people learn to be more effective animal trainers.

The researchers recently presented their work at the international Autonomous Agents and Multiagent Systems conference, a scientific gathering for agents and robotics research. Funding for the project came from a National Science Foundation grant.

As robots become pervasive in human environments, it is important to enable users to effectively convey new skills without programming. Most existing work on Interactive Reinforcement Learning focuses on interpreting and incorporating non-expert human feedback to speed up learning; we aim to design a better representation of the learning agent that is able to elicit more natural and effective communication between the human trainer and the learner, while treating human feedback as discrete communication that depends probabilistically on the trainer’s target policy. This work entails a user study where participants train a virtual agent to accomplish tasks by giving reward and/or punishment in a variety of simulated environments. We present results from 60 participants to show how a learner can ground natural language commands and adapt its action execution speed to learn more efficiently from human trainers. The agent’s action execution speed can be successfully modulated to encourage more explicit feedback from a human trainer in areas of the state space where there is high uncertainty. Our results show that our novel adaptive speed agent dominates different fixed speed agents on several measures of performance. Additionally, we investigate the impact of instructions on user performance and user preference in training conditions.

Penn State researchers have developed a flexible electronic material that self-heals to restore multiple functions, even after repeated breaks. (Top row) The material is cut in half, then reattached. After healing for 30 minutes, the material is still able to be stretched and hold weight. (credit: Qing Wang, Penn State)

A new electronic material created by an international team headed by Penn State scientists can heal all its functions automatically, even after breaking multiple times. The new material could improve the durability of wearable electronics.

Electronic materials have been a major stumbling block for the advance of flexible electronics because existing materials do not function well after breaking and healing.

“Wearable and bendable electronics are subject to mechanical deformation over time, which could destroy or break them,” said Qing Wang, professor of materials science and engineering, Penn State. “We wanted to find an electronic material that would repair itself to restore all of its functionality, and do so after multiple breaks.”

In the past, researchers have been able to create self-healable materials (such as these, covered on KurzweilAI) that can restore a single function after breaking. But restoring a suite of functions is critical for creating effective wearable electronics. For example, if a dielectric material retains its electrical resistivity after self-healing but not its thermal conductivity, that could put electronics at risk of overheating.

The material that Wang and his team created restores all properties needed for use as a dielectric in wearable electronics — mechanical strength, breakdown strength to protect against surges, electrical resistivity, thermal conductivity, and dielectric (insulating) properties. They published their findings online in Advanced Functional Materials.

“Most research into self-healable electronic materials has focused on electrical conductivity but dielectrics have been overlooked,” said Wang. “We need conducting elements in circuits but we also need insulation and protection for microelectronics.” Most self-healable materials are also soft or “gum-like,” said Wang, but the material he and his colleagues created is very tough in comparison.

His team added boron nitride nanosheets to a base material of plastic polymer. The material is able to self-heal because boron nitride nanosheets connect to one another with hydrogen bonding groups functionalized onto their surface. When two pieces are placed in close proximity, the electrostatic attraction naturally occurring between both bonding elements draws them close together. When the hydrogen bond is restored, the two pieces are “healed.”

Depending on the percentage of boron nitride nanosheets added to the polymer, this self-healing may require additional heat or pressure, but some forms of the new material can self-heal at room temperature when placed next to each other.

Unlike other healable materials that use hydrogen bonds, boron nitride nanosheets are impermeable to moisture. This means that devices using this dielectric material can operate effectively within high humidity contexts such as in a shower or at a beach.

“This is the first time that a self-healable material has been created that can restore multiple properties over multiple breaks, and we see this being useful across many applications,” said Wang.

Researchers at the Harbin Institute of Technology also collaborated on this research, which was supported by the China Scholarship Council.

The continuous evolution toward electronics with high power densities and integrated circuits with smaller feature sizes and faster speeds places high demands on a set of material properties, namely, the electrical, thermal, and mechanical properties of polymer dielectrics. Herein, a supramolecular approach is described to self-healable polymer nanocomposites that are mechanically robust and capable of restoring simultaneously structural, electrical, dielectric, and thermal transport properties after multiple fractures. With the incorporation of surface-functionalized boron nitride nanosheets, the polymer nanocomposites exhibit many desirable features as dielectric materials such as higher breakdown strength, larger electrical resistivity, improved thermal conductivity, greater mechanical strength, and much stabilized dielectric properties when compared to the pristine polymer. It is found that the recovery condition has remained the same during sequential cycles of cutting and healing, therefore suggesting no aging of the polymer nanocomposites with mechanical breakdown. Moreover, moisture has a minimal effect on the healing and dielectric properties of the polymer nanocomposites, which is in stark contrast to what is typically observed in the hydrogen-bonded supramolecular structures.

Stanford University School of Engineering | This easy-to-assemble black box is part of an experimental urinalysis testing system designed by Stanford engineers. The black box is meant to enable a smartphone camera to capture video that accurately analyzes color changes in a standard paper dipstick to detect conditions of medical interest.

Two Stanford University electrical engineers have designed a simple new low-cost, portable urinalysis device that could allow patients to get consistently accurate urine test results at home.

The system uses a black box and a smartphone camera to analyze a standard color-changing paper test: a medical dipstick dipped into the urine specimen measures levels of glucose, blood, protein, and other chemicals, which can indicate evidence of kidney disease, diabetes, urinary tract infections, and even signs of bladder cancer.

The current standard dipstick test uses a paper strip with 10 square pads. Dipped in a sample, each pad changes color to screen for the presence of a different disease-indicating chemical. After waiting the appropriate amount of time, a medical professional — or, increasingly, an automated system — compares the pad shades to a color reference chart for results.
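The chart comparison amounts to nearest-color classification, which can be sketched as follows (the reference shades for this hypothetical glucose pad are invented, not taken from a real dipstick chart):

```python
# Assumed reference shades (RGB) for one pad; a real chart has calibrated colors.
GLUCOSE_CHART = {
    "negative": (120, 200, 220),
    "trace":    (140, 190, 160),
    "moderate": (150, 140, 90),
    "high":     (130, 80, 60),
}

def classify(rgb, chart=GLUCOSE_CHART):
    """Return the chart label whose color is closest in squared RGB distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(chart, key=lambda label: dist2(rgb, chart[label]))

result = classify((148, 142, 95))  # nearest to the "moderate" shade
```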

But the test takes time, costs money, and creates backlogs for clinics and primary care physicians. The results are often inconclusive, requiring both patient and doctor to book another appointment. So patients with long-term conditions like chronic urinary tract infections must wait for results to confirm what both patient and doctor already know before getting antibiotics. Tracking patients’ progress with multiple urine tests a day is out of the question.

Some innovators have tried to create simple, do-it-yourself systems, but they can be error-prone, said Audrey (Ellerbee) Bowden, assistant professor of electrical engineering at Stanford. “You think it’s easy — you just dip the stick in urine and look for the color change, but there are things that can go wrong,” she said. “Doctors don’t end up trusting those results as accurate.”

Writing in Lab on a Chip, a journal of the Royal Society of Chemistry, Bowden and Gennifer Smith, a PhD student in electrical engineering, explain that they designed their system to overcome three main sources of error in a home test: inconsistent lighting, uncontrolled urine volume, and mistimed readouts.

To address these, the engineers designed a multi-layered system to load urine onto the dipstick. A dropper squeezes urine into a hole in the first layer, filling up a channel in the second layer and ten square holes in the third layer. Some clever engineering ensures that a uniform volume of urine is deposited on each of the ten pads on the dipstick at just the right time.

Finally, a smartphone is placed on top of the black box with the video camera focused on the dipstick inside the box. Custom software reads video from the smartphone and controls the timing and color analysis.

To perform the test, a person would load the urine and then push the third layer into the box. When the third layer hits the back of the box, it signals the phone to begin the video recording at the precise moment when the urine is deposited on the pads.

Timing is critical to the analysis. Pads have readout times ranging from 30 seconds to 2 minutes. Once the two minutes are up, the person can transfer the recording to a software program on their computer. For each pad, it pulls out the frames from the correct time and reads out the results.
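The timing logic can be sketched as follows (a minimal illustration, not Stanford's software; the pad readout times and the 30 fps camera rate are assumptions):

```python
# Each pad has a prescribed readout time; with the recording started at the
# moment of urine deposition, that time maps directly to a frame index.
PAD_READOUT_SECONDS = {"glucose": 30, "protein": 60, "blood": 60, "pH": 120}
FPS = 30  # assumed camera frame rate

def frame_index(readout_s, fps=FPS):
    """Index of the frame captured exactly at the pad's readout time."""
    return int(round(readout_s * fps))

def mean_color(pixels):
    """Average an iterable of (R, G, B) pixels covering one pad region."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

glucose_frame = frame_index(PAD_READOUT_SECONDS["glucose"])  # frame 900 at 30 fps
pad_color = mean_color([(200, 180, 60), (210, 170, 70)])
```

Averaging the pad region at the correct frame is what replaces the error-prone "glance at the strip and eyeball the shade" step of a home test.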

The engineers also plan to design an app to send the results directly to the doctor.

Funding for this research came from the National Institutes of Health, the Rose Hills Foundation Graduate Engineering Fellowship, the Electrical Engineering Department New Projects Graduate Fellowship, the Oswald G. Villard Jr. Engineering Fellowship, the Stanford Graduate Fellowship and the National Science Foundation Graduate Research Fellowship.

We introduce a novel manifold and companion software for dipstick urinalysis that eliminate many of the aspects that are traditionally plagued by user error: precise sample delivery, accurate readout timing, and controlled lighting conditions. The proposed all-acrylic slipping manifold is reusable, reliable, and low in cost. A simple timing mechanism ensures results are read out at the appropriate time. Results are obtained by capturing videos using a mobile phone and by analyzing them using custom-designed software. We show that the results obtained with the proposed device are as accurate and consistent as a properly executed dip-and-wipe method, the industry gold-standard, suggesting the potential for this strategy to enable confident urinalysis testing in home environments.

For the first time, scientists at IBM Research have demonstrated reliably storing 3 bits of data per cell using a relatively new memory technology known as phase-change memory (PCM). In this photo, the experimental multi-bit PCM chip used by IBM scientists is connected to a standard integrated circuit board. The chip consists of a 2 × 2 Mcell array with a 4-bank interleaved architecture. The memory array size is 2 × 1000 μm × 800 μm. The PCM cells are based on doped-chalcogenide alloy and were integrated into the prototype chip, serving as a characterization vehicle in 90 nm CMOS baseline technology. (credit: IBM Research)

Scientists at IBM Research have demonstrated — for the first time (today, May 17), at the IEEE International Memory Workshop in Paris — reliably storing 3 bits of data per cell in a 64k-cell array in a memory chip*, using a relatively new memory technology known as phase-change memory (PCM). Previously, scientists at IBM and elsewhere successfully demonstrated the ability to store only 1 bit per cell in PCM.

The current memory landscape includes DRAM, hard disk drives, and flash. But in the last several years, PCM has attracted the industry’s attention as a potential universal memory technology based on its combination of read/write speed, endurance, non-volatility and density. For example, PCM doesn’t lose data when powered off, unlike DRAM, and the technology can endure at least 10 million write cycles, compared to an average flash USB stick, which tops out at 3,000 write cycles.

IBM suggests this research breakthrough provides fast and easy storage to capture the exponential growth of data from mobile devices and the Internet of Things.

Scientists have long been searching for a universal, non-volatile memory technology with performance far superior to flash — today’s most ubiquitous non-volatile memory technology. The benefits of such a memory technology would allow computers and servers to boot instantaneously and would significantly enhance the overall performance of IT systems. A promising contender is PCM, which can write and retrieve data 100 times faster than flash, enables high storage capacities, and, like flash, does not lose data when the power is turned off. Unlike flash, PCM is also very durable and can endure at least 10 million write cycles, compared to current enterprise-class flash at 30,000 cycles or consumer-class flash at 3,000 cycles. While 3,000 cycles will outlive many consumer devices, 30,000 cycles are orders of magnitude too low to be suitable for enterprise applications. (credit: IBM Research)

IBM scientists envision standalone PCM as well as hybrid applications that combine PCM and flash storage, with PCM as an extremely fast cache. For example, a mobile phone’s operating system could be stored in PCM, enabling the phone to launch in a few seconds. In the enterprise space, entire databases could be stored in PCM for blazing fast query processing in time-critical online applications, such as financial transactions.

Machine learning algorithms using large datasets will also see a speed boost by reducing the latency overhead when reading the data between iterations.

How PCM Works: answering the grand challenge of combining properties of DRAM and flash

To store a “0” or a “1” bit on a PCM cell, a high or medium electrical current is applied to the material. A “0” can be programmed to be written in the amorphous phase and a “1” in the crystalline phase, or vice versa. Then to read the bit back, a low voltage is applied. This is how re-writable (but slower) Blu-ray discs** store videos.
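Multi-bit storage extends this idea from two phases to several intermediate resistance levels per cell; a readout can be sketched as follows (an illustration only, not IBM's circuit; the resistance values and thresholds are invented):

```python
import bisect

# Storing 3 bits/cell means distinguishing 8 resistance levels between the
# low-resistance crystalline extreme and the high-resistance amorphous one.
THRESHOLDS = [2e3, 5e3, 1e4, 3e4, 8e4, 2e5, 6e5]  # 7 thresholds -> 8 levels (ohms)

def read_cell(resistance_ohms, thresholds=THRESHOLDS):
    """Map a measured cell resistance to a 3-bit symbol (0..7)."""
    return bisect.bisect_left(thresholds, resistance_ohms)

low_end = read_cell(1e3)   # near the crystalline extreme: level 0
high_end = read_cell(1e6)  # near the amorphous extreme: level 7
```

Packing eight levels into one cell is exactly what makes drift and variability so challenging, since the levels sit much closer together than a single 0/1 split.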

Phase-change memory (PCM) is one of the most promising candidates for next-generation non-volatile memory technology. The cross-sectional tunneling electron microscopy (TEM) image of a mushroom-type PCM cell is shown in this photo. The cell consists of a layer of phase-change material, such as germanium antimony telluride (GST), sandwiched between a bottom and a top electrode. In this architecture, the bottom electrode has a radius (denoted as rE) of approx. 15 nm and is fabricated by sub-lithographic means. The top electrode has a radius in excess of 100 nm and the thickness of the phase change layer is approx. 100 nm. A transistor or a diode is typically employed as the access device. (credit: IBM Research — Zurich)

“Phase change memory is the first instantiation of a universal memory with properties of both DRAM and flash, thus answering one of the grand challenges of our industry,” said Haris Pozidis, PhD, an author of the workshop paper and the manager of non-volatile memory research at IBM Research–Zurich. “Reaching 3 bits per cell is a significant milestone because at this density, the cost of PCM will be significantly less than DRAM and closer to flash.”

To achieve multi-bit storage, IBM scientists have developed two innovative enabling technologies: a set of drift-immune cell-state metrics and drift-tolerant coding and detection schemes***. “Combined, these advancements address the key challenges of multi-bit PCM, including drift, variability, temperature sensitivity, and endurance cycling,” said IBM Fellow Evangelos Eleftheriou, PhD.

IBM scientists have also demonstrated, for the first time, phase-change memory attached to POWER8-based servers.

*** More specifically, the new cell-state metrics measure a physical property of the PCM cell that remains stable over time, and are thus insensitive to drift, which affects the stability of the cell’s electrical conductivity with time. To provide additional robustness of the stored data in a cell over ambient temperature fluctuations, a novel coding and detection scheme is employed. This scheme adaptively modifies the level thresholds that are used to detect the cell’s stored data so that they follow variations due to temperature change. As a result, the cell state can be read reliably over long time periods after the memory is programmed, thus offering non-volatility.
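A minimal sketch of the adaptive-threshold idea, assuming reference cells programmed to known metric values (all numbers invented; this is not IBM's actual scheme):

```python
# Reference cells are re-read at read time, and the level thresholds are
# shifted by the drift observed on them, so data cells still decode
# correctly after temperature-induced shifts.

def adapt_thresholds(thresholds, ref_programmed, ref_measured):
    """Shift level thresholds by the mean offset seen on reference cells."""
    offsets = [m - p for p, m in zip(ref_programmed, ref_measured)]
    shift = sum(offsets) / len(offsets)
    return [t + shift for t in thresholds]

base = [1.0, 2.0, 3.0]          # nominal level thresholds (arbitrary units)
refs_written = [0.5, 2.5, 3.5]  # metric values when the cells were programmed
refs_now = [0.7, 2.7, 3.7]      # the same cells read later: drifted by ~0.2
adapted = adapt_thresholds(base, refs_written, refs_now)  # ~[1.2, 2.2, 3.2]
```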

* The experimental multi-bit PCM chip used by IBM scientists is connected to a standard integrated circuit board. The chip consists of a 2 × 2 Mcell array with a four-bank interleaved architecture. The memory array size is 2 × 1000 μm × 800 μm. The PCM cells are based on doped-chalcogenide alloy and were integrated into the prototype chip serving as a characterization vehicle in 90 nm CMOS baseline technology.

IBM Research | IBM Scientists Achieve Storage Memory Breakthrough

Abstract of Multilevel-Cell Phase Change Memory: A Viable Technology

In order for any non-volatile memory (NVM) to be considered a viable technology, its reliability should be verified at the array level. In particular, properties such as high endurance and at least moderate data retention are considered essential. Phase-change memory (PCM) is one such NVM technology that possesses highly desirable features and has reached an advanced level of maturity through intensive research and development in the past decade. Multilevel-cell (MLC) capability, i.e., storage of two bits per cell or more, is not only desirable as it reduces the effective cost per storage capacity, but a necessary feature for the competitiveness of PCM against the incumbent technologies, namely DRAM and Flash memory. MLC storage in PCM, however, is seriously challenged by phenomena such as cell variability, intrinsic noise, and resistance drift. We present a collection of advanced circuit-level solutions to the above challenges, and demonstrate the viability of MLC PCM at the array level. Notably, we demonstrate reliable storage and moderate data retention of 2 bits/cell PCM, on a 64 k cell array, at elevated temperatures and after 1 million SET/RESET endurance cycles. Under similar operating conditions, we also show feasibility of 3 bits/cell PCM, for the first time ever.

Inhibition of medial septum GABA neurons during rapid eye movement (REM) sleep reduces theta rhythm (a characteristic of REM sleep). Schematic of the in vivo recording configuration: an optic fiber delivered orange laser light to the medial septum part of the brain, allowing for optogenetic inhibition of medial septum GABA neurons while recording the local field potential signal from electrodes implanted in hippocampus area CA1. (credit: Richard Boyce et al./Science)

A study published in the journal Science by researchers at the Douglas Mental Health University Institute at McGill University and the University of Bern provides the first evidence that rapid eye movement (REM) sleep — the phase where dreams appear — is directly involved in memory formation (at least in mice).

“We already knew that newly acquired information is stored into different types of memories, spatial or emotional, before being consolidated or integrated,” says Sylvain Williams, a researcher and professor of psychiatry at McGill*. “How the brain performs this process has remained unclear until now. We were able to prove for the first time that REM sleep (dreaming) is indeed critical for normal spatial memory formation in mice,” said Williams.

Dream quest

Hundreds of previous studies have tried unsuccessfully to isolate neural activity during REM sleep using traditional experimental methods. In this new study, the researchers instead used optogenetics, which enables scientists to precisely target a population of neurons and control its activity by light.

“We chose to target [GABA neurons in the medial septum] that regulate the activity of the hippocampus, a structure that is critical for memory formation during wakefulness and is known as the ‘GPS system’ of the brain,” Williams says.

To test the long-term spatial memory of mice, the scientists trained the rodents to spot a novel object placed in a controlled environment alongside two familiar objects of similar shape and volume. Mice spontaneously spend more time exploring a novel object than a familiar one, showing that they have learned and can recall which objects were already present.

While these mice were in REM sleep, the researchers used light pulses to switch off the memory-associated neurons, to determine whether this affected memory consolidation. The next day, the same rodents failed the spatial memory task they had learned the previous day. Compared to the control group, their memory seemed erased, or at least impaired.

“Silencing the same neurons for similar durations outside of REM episodes had no effect on memory. This indicates that neuronal activity specifically during REM sleep is required for normal memory consolidation,” says the study’s lead author, Richard Boyce, a PhD student.

Implications for brain disease

REM sleep is understood to be a critical component of sleep in all mammals, including humans. Poor sleep quality is increasingly associated with the onset of various brain disorders such as Alzheimer’s and Parkinson’s disease.

In particular, REM sleep is often significantly perturbed in Alzheimer’s disease (AD), and results from this study suggest that disruption of REM sleep may contribute directly to the memory impairments observed in AD, the researchers say.

This work was partly funded by the Canadian Institutes of Health Research (CIHR), the Natural Science and Engineering Research Council of Canada (NSERC), a postdoctoral fellowship from Fonds de la recherche en Santé du Québec (FRSQ) and an Alexander Graham Bell Canada Graduate scholarship (NSERC).

* Williams’ team is also part of the CIUSSS de l’Ouest-de-l’Île-de-Montréal research network. Williams co-authored the study with Antoine Adamantidis, a researcher at the University of Bern’s Department of Clinical Research and at the Sleep Wake Epilepsy Center of the Bern University Hospital.

Abstract of Causal evidence for the role of REM sleep theta rhythm in contextual memory consolidation

Rapid eye movement sleep (REMS) has been linked with spatial and emotional memory consolidation. However, establishing direct causality between neural activity during REMS and memory consolidation has proven difficult because of the transient nature of REMS and significant caveats associated with REMS deprivation techniques. In mice, we optogenetically silenced medial septum γ-aminobutyric acid–releasing (MSGABA) neurons, allowing for temporally precise attenuation of the memory-associated theta rhythm during REMS without disturbing sleeping behavior. REMS-specific optogenetic silencing of MSGABA neurons selectively during a REMS critical window after learning erased subsequent novel object place recognition and impaired fear-conditioned contextual memory. Silencing MSGABA neurons for similar durations outside REMS episodes had no effect on memory. These results demonstrate that MSGABA neuronal activity specifically during REMS is required for normal memory consolidation.

The experiment, featuring the small red glow of a BEC trapped in infrared laser beams (credit: Stuart Hay, ANU)

Australian physicists have used an online optimization process based on machine learning to produce effective Bose-Einstein condensates (BECs) in a fraction of the time it would normally take the researchers.

A BEC is a state of matter of a dilute gas of atoms trapped in a laser beam and cooled to temperatures just above absolute zero. BECs are extremely sensitive to external disturbances, which makes them ideal for research into quantum phenomena or for making very precise measurements such as tiny changes in the Earth’s magnetic field or gravity.

The experiment, developed by physicists from ANU, University of Adelaide and UNSW ADFA, demonstrated that “machine-learning online optimization” can discover optimized condensation methods “with less experiments than a competing optimization method and provide insight into which parameters are important in achieving condensation,” the physicists explain in an open-access paper in the Nature group journal Scientific Reports.

The team cooled the gas to around 5 microkelvin. To further cool down the trapped gas (containing about 40 million rubidium atoms) to on the order of nanokelvin*, they then handed control of the three laser beams** over to the machine-learning program.

The physicists were surprised by the clever methods the system came up with to create a BEC, like changing one laser’s power up and down, and compensating with another laser.

“I didn’t expect the machine could learn to do the experiment itself, from scratch, in under an hour,” said co-lead researcher Paul Wigley from ANU Research School of Physics and Engineering. “A simple computer program would have taken longer than the age of the universe to run through all the combinations and work this out.”

Wigley suggested that one could build a working gravity-measuring device small enough to carry in the back of a car, with the AI automatically recalibrating and repairing it.

“It’s cheaper than taking a physicist everywhere with you,” he said.

* Billionth of a degree above absolute zero — where a phase transition occurs, and a macroscopic number of atoms start to occupy the same quantum state, called Bose-Einstein condensation.

** The 1064 nm beam is controlled by varying the current to the laser, while the 1090 nm beam is controlled using the current and a waveplate rotation stage combined with a polarizing beamsplitter to provide additional power attenuation while maintaining mode stability.

We apply an online optimization process based on machine learning to the production of Bose-Einstein condensates (BEC). BEC is typically created with an exponential evaporation ramp that is optimal for ergodic dynamics with two-body s-wave interactions and no other loss rates, but likely sub-optimal for real experiments. Through repeated machine-controlled scientific experimentation and observations our ‘learner’ discovers an optimal evaporation ramp for BEC production. In contrast to previous work, our learner uses a Gaussian process to develop a statistical model of the relationship between the parameters it controls and the quality of the BEC produced. We demonstrate that the Gaussian process machine learner is able to discover a ramp that produces high quality BECs in 10 times fewer iterations than a previously used online optimization technique. Furthermore, we show the internal model developed can be used to determine which parameters are essential in BEC creation and which are unimportant, providing insight into the optimization process of the system.
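The Gaussian-process online-optimization loop described above can be sketched in a few lines. The two-parameter toy “experiment,” the RBF kernel length scale, and the upper-confidence-bound selection rule below are stand-in assumptions for illustration, not the team’s actual experimental setup:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Stand-in for the real experiment: a "BEC quality" score as a function of
# two evaporation-ramp parameters, peaking at a hypothetical optimum.
def run_experiment(params):
    return -np.sum((params - np.array([0.3, 0.7])) ** 2)

# Seed the learner with a few random experiments.
X = rng.uniform(0, 1, size=(3, 2))
y = np.array([run_experiment(x) for x in X])

# Online loop: refit the Gaussian process to all observations so far, then
# run the next "experiment" where the model's upper confidence bound
# (predicted mean plus an exploration bonus) is highest.
for _ in range(15):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3),
                                  normalize_y=True).fit(X, y)
    candidates = rng.uniform(0, 1, size=(200, 2))
    mean, std = gp.predict(candidates, return_std=True)
    nxt = candidates[np.argmax(mean + 1.0 * std)]
    X = np.vstack([X, nxt])
    y = np.append(y, run_experiment(nxt))

best = X[np.argmax(y)]
```

The statistical model is what distinguishes this learner from blind parameter sweeps: its predictive uncertainty tells it where an experiment is worth running, and its fitted length scales indicate which parameters actually matter.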

An “origami robot” unfolds itself from an ingestible capsule. It could be used by a physician to perform a remote-controlled operation (credit: Melanie Gonick/MIT)

MIT researchers and associates have developed a tiny “origami robot” that can unfold itself from a swallowed capsule and, steered by a physician via an external magnetic field, crawl across the stomach wall to operate on a patient. For example, it can remove a swallowed button battery or patch a wound.

System for remote-controlled clinical procedures via origami-based robot. A patient swallows an iced capsule, which melts when it reaches the stomach, releasing the robot from a folded origami structure. To remove a foreign body (such as a button battery), the physician controls the robot from outside the body via a magnetic field that affects the magnet inside the delivery structure, allowing the robot to push the foreign body into the GI system. The robot can also treat an inflammation by releasing a drug contained in the delivery structure. (credit: Shuhei Miyashita et al./ICRA Proceedings)

Every year, 3,500 cases of swallowed button batteries are reported in the U.S. alone. Most batteries pass through the digestive tract without harm, but if one comes into prolonged contact with the tissue of the esophagus or stomach, it can drive an electric current that produces hydroxide, which burns the tissue.

The researchers at MIT, the University of Sheffield, and the Tokyo Institute of Technology presented the work last week at the International Conference on Robotics and Automation. The design built on previous work (see related links below) from the research group of Daniela Rus, the Andrew and Erna Viterbi Professor in MIT’s Department of Electrical Engineering and Computer Science. The new robot is a successor to one reported at this conference last year, with an improved design, tested in a pig stomach.

Rus and the team plan further developments, including the robot’s ability to perform procedures without physician remote control.

Developing miniature robots that can carry out versatile clinical procedures inside the body under the remote instructions of medical professionals has been a long time challenge. In this paper, we present origami-based robots that can be ingested into the stomach, locomote to a desired location, remove a foreign body, deliver drugs, and biodegrade. We designed and fabricated composite material sheets for a biocompatible and biodegradable robot that can be encapsulated in ice for delivery through the esophagus, embed a drug layer that is passively released to a wounded area, and be remotely controlled to carry out underwater maneuvers specific to the tasks using magnetic fields. The performances of the robots are demonstrated in a simulated physical environment consisting of an esophagus and stomach with properties similar to the biological organs.

The White House Office of Science and Technology Policy has announced plans to co-host four public workshops to spur public dialogue on artificial intelligence and machine learning, and to learn more about the benefits and risks of artificial intelligence, according to Ed Felten, a Deputy U.S. Chief Technology Officer.

These four workshops will be co-hosted by academic and non-profit organizations; two will also be co-hosted by the National Economic Council, with a public report to follow later this year. All four will be livestreamed.

The Federal Government also is “working to leverage AI for public good and toward a more effective government.” A new National Science and Technology Council (NSTC) Subcommittee on Machine Learning and Artificial Intelligence will monitor state-of-the-art advances and technology milestones in artificial intelligence and machine learning within the Federal Government, in the private sector, and internationally; and help coordinate Federal activity in this space.

The NSTC group also hopes to increase the use of AI and machine learning to improve the delivery of government services, especially in areas related to urban systems and smart cities, mental and physical health, social welfare, criminal justice, and the environment.

A simulation of Brownian motion (random walk) of a dust particle (yellow) that collides with a large set of smaller particles (molecules of a gas) moving with different velocities in different random directions (credit: Lookang et al./CC)

Researchers at the Universities of Bristol and Western Australia have demonstrated a practical use of a “primitive” quantum computer, using an algorithm known as “quantum walk.” They showed that a two-qubit photonics quantum processor can outperform classical computers for this type of algorithm, without requiring more sophisticated quantum computers, such as IBM’s five-qubit cloud-based quantum processor (see IBM makes quantum computing available free on IBM Cloud).

Quantum walk is the quantum-mechanical analog of “random-walk” models such as Brownian motion (for example, the random motion of a dust particle in air). The researchers implemented “continuous-time quantum walk” computations on circulant graphs* in a proof-of-principle experiment.

The probability distribution of quantum walk on an example circulant graph. Sampling this probability distribution is generally hard for a classical computer, but simple on a primitive quantum computer. (credit: University of Bristol)

Jonathan Matthews, PhD, EPSRC Early Career Fellow and Lecturer in the School of Physics and the Centre for Quantum Photonics, whose team reported the work in an open-access paper in Nature Communications, explained: “An exciting outcome of our work is that we may have found a new example of quantum walk physics that we can observe with a primitive quantum computer, that otherwise a classical computer could not see. These otherwise hidden properties have practical use, perhaps in helping to design more sophisticated quantum computers.”

Microsoft | Quantum Computing 101

* A circulant graph is a graph where every vertex is connected to the same set of relative vertices, as explained in an open-access paper by Salisbury University student Shealyn Tucker, including a practical example of the use of a circulant graph:

Example of a circulant graph depicting how products should be optimally collocated, based on which products customers buy at a grocery store (credit: Shealyn Tucker/Salisbury University)

Abstract of Efficient quantum walk on a quantum processor

The random walk formalism is used across a wide range of applications, from modelling share prices to predicting population genetics. Likewise, quantum walks have shown much potential as a framework for developing new quantum algorithms. Here we present explicit efficient quantum circuits for implementing continuous-time quantum walks on the circulant class of graphs. These circuits allow us to sample from the output probability distributions of quantum walks on circulant graphs efficiently. We also show that solving the same sampling problem for arbitrary circulant quantum circuits is intractable for a classical computer, assuming conjectures from computational complexity theory. This is a new link between continuous-time quantum walks and computational complexity theory and it indicates a family of tasks that could ultimately demonstrate quantum supremacy over classical computers. As a proof of principle, we experimentally implement the proposed quantum circuit on an example circulant graph using a two-qubit photonics quantum processor.
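At small scale, a continuous-time quantum walk on a circulant graph is easy to simulate classically, because every circulant matrix is diagonalized by the discrete Fourier transform (the classical hardness claimed above concerns sampling from large instances). A minimal sketch, with an assumed connection set and walk time chosen purely for illustration:

```python
import numpy as np

# Example circulant graph: 8 vertices, each linked to its neighbours at
# distances 1 and 2 (mod 8). The connection set and walk time are
# illustrative assumptions, not the graph used in the experiment.
n, connections, t = 8, [1, 2], 1.0

# Circulant adjacency matrix: vertex i is linked to i +/- d (mod n).
A = np.zeros((n, n))
for d in connections:
    for i in range(n):
        A[i, (i + d) % n] = A[i, (i - d) % n] = 1

# Circulant matrices are diagonalized by the DFT, so the walk unitary
# exp(-iAt) can be built directly from the eigenvalues, which are just
# the DFT of the matrix's first row.
F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary DFT matrix
eigs = np.fft.fft(A[0]).real             # real, since A is symmetric
U = F.conj().T @ np.diag(np.exp(-1j * eigs * t)) @ F

# Start the walker on vertex 0 and read out the probability distribution.
psi = U @ np.eye(n)[0]
probs = np.abs(psi) ** 2
```

Sampling from `probs` is exactly the task the two-qubit photonic processor performs natively; the classical cost of this eigen-decomposition route is what grows prohibitively for the graph families analyzed in the paper.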

Moogfest 2016, a four-day, mind-expanding festival on the synthesis of technology, art, and music, will happen this coming week (Thursday, May 19 to Sunday, May 22) near Duke University in Durham, North Carolina, with more than 300 musical performances, workshops, conversations, masterclasses, film screenings, live scores, sound installations, multiple interactive art experiences, and “The Future of Creativity” keynotes by visionary futurist Martine Rothblatt, PhD, and virtual reality pioneer and author Jaron Lanier.

Cyborg activist Neil Harbisson is the first person in the world with an antenna implanted in his skull, allowing him to hear the frequencies of colors (including infrared and ultraviolet) via bone conduction and receive phone calls. (credit: N. Harbisson)

By day, Moogfest unfolds in venues throughout downtown Durham, in spaces that range from intimate galleries and experimental art installations to grand theaters, as a platform for geeky exploration and experimentation in sessions and workshops featuring more than 250 innovators in music, art, and technology. Avant-garde pioneers include cyborg Neil Harbisson; technoshaman paleo-ecologist/multimedia performer Michael Garfield on “Technoshamanism: A Very Psychedelic Century”; sonifying plants with Data Garden; the Google Magenta team on training neural networks to generate music; Onyx Ashanti showing how to program music with your mind; Google Doodle’s Ryan Germick; and cyborg artist Moon Ribas, whose cybernetic implants in her arms perceive the movement of real-time earthquakes.

Modular Marketplace 2014 (credit: PatrickPKPR)

Among the fun experimental venues will be the musical Rube Goldberg workshop, the Global Synthesizer Project (an interactive electronic musical instrument installation where users can synthesize environmental sounds from around the world), THETA (a guided meditation virtual reality spa), WiFi Whisperer (an art installation that visually displays the signals around us), the Musical Playground, and Modular Marketplace, an interactive exhibition showcasing the latest and greatest from Moog Music and other innovative instrument makers, where the public can engage with new musical devices and their designers. Modular Marketplace is free and open to the public at the American Tobacco Campus, 318 Blackwell Street, from 10am–6pm, May 19–22.

INSTRUMENT 1 from Artiphon will make its public debut at Moogfest 2016. It allows users of any skill or style to strum a guitar, tap a piano, bow a violin, or loop a drum beat — all on a single interface. By connecting to iOS devices, Macs and PCs, this portable musical tool can make any sound imaginable.

By night, Moogfest will present cutting-edge music in venues throughout the city. Performing artists include pioneers in electronic music like Laurie Anderson and legendary synth pioneer Suzanne Ciani, alongside pop and avant-garde experimentalists of today, including Grimes, Explosions in the Sky, Oneohtrix Point Never, Alessandro Cortini, Daniel Lanois, Tim Hecker, Arthur Russell Instrumentals, Rival Consoles, and Dawn of Midi.

Durham’s historic Armory is transformed into a dark and body-thumping dance club to host the best of electronica, house, disco and techno. Godfathers of the genre include The Orb, DJ Harvey, and Robert Hood alongside inspiring new acts such as Bicep (debuting their live show), The Black Madonna and a Ryan Hemsworth curated night including Jlin, Qrion and UVBoi.

“The liberation of LGBTQ+ people is wired into the original components of electronic music culture…” — Artists’ statement here

Local favorite Pinhook features a wide range of experimental sounds: heavy techno from Kyle Hall, Paula Temple, and Karen Gwyer; live experimentation from Via App, Patricia, M. Geddes Gengras, and Julia Holter; jaggedly rhythmic futurists Rabit and Lotic; and the avant-garde doom metal of The Body.

Moogfest’s largest venue, Motorco Park, is a mix of future-forward electro-pop and R&B, with performances by ODESZA, Blood Orange, critically acclaimed emerging artist DAWN (Dawn Richard) playing her first NC show, the kickoff of Miike Snow’s U.S. tour, Gary Numan, Silver Apples, Mykki Blanco, and newly announced The Range, as well as a distinguished hip-hop lineup that includes GZA, Skepta, Tory Lanez, Daye Jack, Denzel Curry, Lunice, and local artists King Mez, Professor Toon, and Well$.

Since 2004, Moogfest has brought together artists, futurist thinkers, inventors, entrepreneurs, designers, engineers, scientists, and musicians. Moogfest is a tribute to Dr. Robert “Bob” Moog and the profound influence his inventions have had on how we hear the world. Over the last sixty years, Bob Moog and Moog Music have pioneered the analog synthesizer and other technology tools for artists. He was vice president for new product research at Kurzweil Music Systems from 1984 to 1988.