Kurzweil AI

Working prototype of wearable artificial kidney developed by Victor Gura, MD, and his team (credit: Stephen Brashear/University of Washington)

An FDA-approved exploratory clinical trial of a prototype wearable artificial kidney (WAK) — a miniaturized, wearable hemodialysis machine — at the University of Washington Medical Center in Seattle has been completed, the researchers reported June 2 in an open-access paper in JCI Insight.

The seven patients enrolled in the study reported “significantly greater treatment satisfaction during the WAK treatment period compared with ratings of care during periods of conventional in-center hemodialysis treatment,” according to the researchers.

“During the study, hemodynamic parameters remained stable, ultrafiltration was achieved as intended, and there were no unexpected adverse treatment effects.” The study was led by the device inventor, Victor Gura, M.D., of Cedars-Sinai Medical Center in Los Angeles and Blood Purification Technologies Inc.

The trial was stopped after the seventh subject due to device-related technical problems, including excessive carbon dioxide bubbles in the dialysate circuit and variable blood and dialysate flows, which the scientists plan to fix.

More than 2 million people worldwide experience end-stage renal disease (ESRD), which is currently treated with hemodialysis therapies that require patients to adhere to restrictive dietary and fluid intake limitations and are associated with a high pill burden, according to the researchers. Adjusted rates of all-cause mortality are up to 8 times greater for dialysis patients compared with age-matched individuals in the general population, they note.

The WAK is designed to be worn and used by patients for up to 24 hours per day. The hope is that treatment can be administered at the patients’ homes either by the patients themselves or caretakers. Being able to be ambulatory while undergoing dialysis, if further proven in additional studies, “would liberate patients from the need to be tethered to a stationary machine during dialysis treatments,” according to the researchers.

(credit: Blood Purification Technologies Inc.)

The researchers caution that “long-term safety of continuous treatment with the WAK has not been established yet. Longer-term studies treating patients in the outpatient and home environment are necessary to address safety issues during ambulation and the home operation of the device by patients and to incorporate additional human factor elements.”

MIT researchers have developed synthetic biological circuits (from bacteria, for example, as shown here) that combine analog and digital computation as “living therapeutics” to treat major diseases and rare genetic disorders (credit: Synlogic)

MIT researchers have developed synthetic biological circuits that combine both analog (continuous) and digital (discrete) computation — allowing living cells to carry out complex processing operations, such as releasing a drug in response to low glucose levels.

The research is presented in an open-access paper published in the journal Nature Communications.

Background: analog vs. digital biological circuits

Like electronic circuits, living cells are capable of performing computations that are either continuous (analog), like the way eyes adjust to gradual changes in light levels, or digital, involving simple discrete on/off processes, such as a cell’s self-programmed death (apoptosis). Current synthetic biological systems, in contrast, have tended to focus on either analog or digital processing alone, limiting their range of uses.

Two basic logic circuits. An AND gate fires only if both inputs are “true” (for example, both inputs have a 1-volt signal, not zero). An OR gate fires if either (or both) of the inputs is true (for example, the top input has a 1-volt signal and the bottom input has zero). (credit: KurzweilAI)

Digital systems are based on a simple binary output, such as 0 or 1, so performing complex computational operations requires a large number of parts (such as AND and OR logic gates) to make a decision, which is difficult to achieve in synthetic biological systems. (There are seven basic logic gates: AND, OR, XOR, NOT, NAND, NOR, and XNOR.)
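To make the parts-count point concrete, here is a minimal sketch (in Python, purely illustrative and not from the study) of the seven basic gates, and of how even a simple three-input decision already requires composing several of them:

```python
# Minimal sketch of the seven basic logic gates, with signals as 0/1.
def AND(a, b):  return a & b
def OR(a, b):   return a | b
def XOR(a, b):  return a ^ b
def NOT(a):     return 1 - a
def NAND(a, b): return NOT(AND(a, b))
def NOR(a, b):  return NOT(OR(a, b))
def XNOR(a, b): return NOT(XOR(a, b))

# Complex decisions compose gates. For example, "fire if exactly one
# of three inputs is active" needs parity (XOR chain) plus a guard
# against the all-three case -- which is why complex digital logic
# needs many parts:
def exactly_one(a, b, c):
    return AND(XOR(a, XOR(b, c)), NOT(AND(a, AND(b, c))))
```

Each extra condition multiplies the number of gates needed, which in a cell means more engineered genetic parts.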

Using genes (instead of voltages), synthetic biologists design genetic circuits (arrangements of DNA components) that can perform new functions. For example, here’s a gene circuit that was constructed using Escherichia coli bacteria (source: Imperial College London/Nature Communications study):

Example of a biological AND gate. Two environment-responsive gene promoters (a promoter is a region of DNA that initiates transcription — that is, the copying of a particular segment of DNA into RNA), P1 and P2, act as the inputs, responding to small molecules and driving transcription of the hrpR and hrpS genes. Transcription of the output gene gfp is turned on only when both proteins HrpR and HrpS are present. (credit: Baojun Wang et al./Nature Communications)

“Most of the work in synthetic biology has focused on the digital approach, because [digital systems] are much easier to program,” says Timothy Lu, an associate professor of electrical engineering and computer science and of biological engineering, and head of the Synthetic Biology Group at MIT’s Research Laboratory of Electronics.

The new synthetic circuits can measure the level of an analog input, such as a particular chemical relevant to a disease, and then make a binary decision — for example, turning on an output, such as a drug that treats the disease if the level is in the right range.

The new circuits combine multiple elements. A threshold module, for example, consists of a sensor that detects analog levels of a particular chemical; the sensor controls the expression of a second, digital component, a recombinase gene, which can then flip a segment of DNA on or off, converting the analog input into a digital (on or off) output. (This conversion process is similar to that of electronic devices known as comparators, which take analog input signals and convert them into a digital output.)

If the concentration of the chemical reaches a certain level, the threshold module expresses the recombinase gene, which flips the DNA segment. Because that segment contains a gene or gene-regulatory element, the flip alters the expression of a desired output.

“So this is how we take an analog input, such as a concentration of a chemical, and convert it into a 0 or 1 signal,” Lu says. “And once that is done, and you have a piece of DNA that can be flipped upside down, then you can put together any of those pieces of DNA to perform digital computing,” he says.
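Lu’s threshold-module description maps naturally onto a comparator. The toy Python model below only illustrates that logic; the class name, threshold value, and one-way “flip” behavior are assumptions made for the sketch, not details from the paper:

```python
# Toy model of the analog-to-digital threshold module described above.
# A sensor reads an analog chemical concentration; if it crosses a
# threshold, the recombinase flips a DNA segment, producing a digital
# (on/off) output. Names and threshold values are illustrative only.

def comparator(concentration, threshold):
    """Analog in, digital out: 1 if the level reaches the threshold."""
    return 1 if concentration >= threshold else 0

class ThresholdModule:
    def __init__(self, threshold):
        self.threshold = threshold
        self.dna_flipped = False   # state of the invertible DNA segment

    def sense(self, concentration):
        # Once triggered, the recombinase flip is treated as permanent
        # here (an assumption of this sketch).
        if comparator(concentration, self.threshold):
            self.dna_flipped = True
        return 1 if self.dna_flipped else 0

module = ThresholdModule(threshold=0.5)
module.sense(0.2)   # below threshold: output stays 0
module.sense(0.8)   # crosses threshold: segment flips, output becomes 1
```

Once the segment is flipped, its state is a digital bit that downstream circuits can compute on, which is the composability Lu describes.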

Ternary logic for three-way glucose decisions

The team has also built an analog-to-digital converter circuit that implements ternary (three-valued) logic. The circuit can produce two different outputs and switches on only in response to either a high or a low concentration range of an input.

In the future, the circuit could be used to detect glucose levels in the blood and respond in one of three ways depending on the concentration, he says. “If the glucose level was too high, you might want your cells to produce insulin, if the glucose was too low you might want them to make glucagon, and if it was in the middle you wouldn’t want them to do anything,” he says.
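That three-way decision is straightforward to express as two comparators feeding ternary logic. The sketch below is illustrative only; the threshold numbers are arbitrary placeholders, not clinical values or values from the study:

```python
# Toy ternary (three-valued) logic for the glucose example above.
# Two comparators digitize the analog level against a low and a high
# threshold; the pair of bits selects one of three responses.
# Threshold values are arbitrary placeholders, not clinical numbers.
LOW, HIGH = 70, 140

def glucose_response(level):
    above_high = level > HIGH   # comparator 1
    below_low = level < LOW     # comparator 2
    if above_high:
        return "produce insulin"
    if below_low:
        return "produce glucagon"
    return "do nothing"
```

Swapping the sensor (the comparator inputs) while keeping the same logic is what would let the same circuit respond to other chemicals.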

Similar analog-to-digital converter circuits could also be used to detect a variety of chemicals, simply by changing the sensor, Lu says.

Detecting inflammation and environmental conditions

The researchers are investigating the idea of using analog-to-digital converters to detect levels of inflammation in the gut caused by inflammatory bowel disease, for example, and releasing different amounts of a drug in response.

Immune cells used in cancer treatment could also be engineered to detect different environmental inputs, such as oxygen or tumor lysis (cell breakdown) levels, and vary their therapeutic activity in response.

Other research groups are also interested in using the devices for environmental applications, such as engineering cells that detect concentrations of water pollutants, Lu says.

The research team recently created a spinout company, called Synlogic, which is now attempting to use simple versions of the circuits to engineer probiotic bacteria that can treat diseases in the gut. The company hopes to begin clinical trials of these bacteria-based treatments within the next 12 months.

Abstract of Synthetic mixed-signal computation in living cells

Living cells implement complex computations on the continuous environmental signals that they encounter. These computations involve both analogue- and digital-like processing of signals to give rise to complex developmental programs, context-dependent behaviours and homeostatic activities. In contrast to natural biological systems, synthetic biological systems have largely focused on either digital or analogue computation separately. Here we integrate analogue and digital computation to implement complex hybrid synthetic genetic programs in living cells. We present a framework for building comparator gene circuits to digitize analogue inputs based on different thresholds. We then demonstrate that comparators can be predictably composed together to build band-pass filters, ternary logic systems and multi-level analogue-to-digital converters. In addition, we interface these analogue-to-digital circuits with other digital gene circuits to enable concentration-dependent logic. We expect that this hybrid computational paradigm will enable new industrial, diagnostic and therapeutic applications with engineered cells.

The macaw has a brain the size of an unshelled walnut, compared to the macaque monkey’s lemon-sized brain. But the macaw has more neurons in its forebrain — the portion of the brain associated with intelligent behavior — than the macaque. (credit: Vanderbilt University)

The first study to systematically measure the number of neurons in the brains of more than two dozen species of birds has found that the birds that were studied consistently have more neurons packed into their small brains than those in mammalian or even primate brains of the same mass.

The study results were published online in an open-access paper in the Proceedings of the National Academy of Sciences early edition on the week of June 13.

Graphic summary of the results of the avian brain study (credit: Pavel Nemec, Charles University of Prague)

“For a long time having a ‘bird brain’ was considered to be a bad thing. Now it turns out that it should be a compliment,” said Vanderbilt University neuroscientist Suzana Herculano-Houzel, senior author of the paper with Pavel Němec at the Charles University in Prague.

The study answers a puzzle that comparative neuroanatomists have been wrestling with for more than a decade: How can birds with their small brains perform complicated cognitive behaviors?

The conundrum was created by a series of studies beginning in the previous decade that directly compared the cognitive abilities of parrots and crows with those of primates. The studies found that the birds could manufacture and use tools, use insight to solve problems, make inferences about cause-effect relationships, recognize themselves in a mirror, and plan for future needs, among other cognitive skills previously considered the exclusive domain of primates.

The collection of avian brains that the scientists analyzed. For each species, the total number of neurons (in millions) in each brain is shown in yellow, the number of neurons (in millions) in the forebrain (pallium) is shown in blue and the brain mass (in grams) is shown in red. The scale bar in the lower right is 10 mm. (credit: Suzana Herculano-Houzel, Vanderbilt University)

So scientists assumed avian brains must be wired differently from primate brains. Two years ago, even this hypothesis was knocked down by a detailed study of pigeon brains, which concluded that they are, in fact, organized along quite similar lines to those of primates.

More neurons in the forebrain than previously thought

Top ten in number of whole-brain neurons and pallium (forebrain) neurons for the avian and mammalian species examined (credit: Seweryn Olkowicz et al./PNAS)

The new study provides a plausible explanation: Birds can perform these complex behaviors because their forebrains contain many more neurons than anyone had previously thought — as many as in mid-sized primates.

“We found that birds, especially songbirds and parrots, have surprisingly large numbers of neurons in their pallium: the part of the brain that corresponds to the cerebral cortex, which supports higher cognition functions such as planning for the future or finding patterns. That explains why they exhibit levels of cognition at least as complex as primates,” said Herculano-Houzel.

That’s because the neurons in avian brains are much smaller and more densely packed than those in mammalian brains, the study found. Parrot and songbird brains, for example, contain about twice as many neurons as primate brains of the same mass and two to four times as many neurons as equivalent rodent brains.

Also, the proportion of neurons in the forebrain is significantly higher, the study found.

More than one way to build better brains

“In designing brains, nature has two parameters it can play with: the size and number of neurons and the distribution of neurons across different brain centers,” said Herculano-Houzel, “and in birds we find that nature has used both of them.”

Although she acknowledges that the relationship between intelligence and neuron count has not yet been firmly established, Herculano-Houzel and her colleagues argue that having the same or greater forebrain neuron counts than primates with much larger brains can potentially provide the birds with much higher “cognitive power” per pound than mammals.

In other words, there’s more than one way to build better brains. Previously, neuroanatomists thought that as brains grew larger, neurons had to grow bigger as well because they had to connect over longer distances. “But bird brains show that there are other ways to add neurons: Keep most neurons small and locally connected and only allow a small percentage to grow large enough to make the longer connections. This keeps the average size of the neurons down,” she explained.

But that raises troubling questions:

Does the surprisingly large number of neurons in bird brains come at a correspondingly large energetic cost?

Are the small neurons in bird brains a response to selection for small body size due to flight, or possibly the ancestral way of adding neurons to the brain, from which mammals, not birds, may have diverged?

Herculano-Houzel hopes that the results of the study and the questions it raises will stimulate other neuroscientists to begin exploring the mysteries of the avian brain, especially how avian behavior compares with that of mammals with similar numbers of neurons or similar brain size.

Researchers at Charles University in Prague and the University of Vienna were also involved in the study.

Vanderbilt University | Bird Brain: Smarter Than You Think

Vanderbilt University | Study gives new meaning to the term “bird brain”

Abstract of Birds have primate-like numbers of neurons in the forebrain

Some birds achieve primate-like levels of cognition, even though their brains tend to be much smaller in absolute size. This poses a fundamental problem in comparative and computational neuroscience, because small brains are expected to have a lower information-processing capacity. Using the isotropic fractionator to determine numbers of neurons in specific brain regions, here we show that the brains of parrots and songbirds contain on average twice as many neurons as primate brains of the same mass, indicating that avian brains have higher neuron packing densities than mammalian brains. Additionally, corvids and parrots have much higher proportions of brain neurons located in the pallial telencephalon compared with primates or other mammals and birds. Thus, large-brained parrots and corvids have forebrain neuron counts equal to or greater than primates with much larger brains. We suggest that the large numbers of neurons concentrated in high densities in the telencephalon substantially contribute to the neural basis of avian intelligence.

An international team of researchers, led by Dagfinn Aune, PhD, at Imperial College London, carried out a meta-analysis of 45 studies (64 publications) of whole grain consumption and found lower risks of coronary heart disease and cardiovascular disease overall, as well as of deaths from all causes and from specific diseases, including stroke, cancer, diabetes, and infectious and respiratory diseases.

The researchers say these results “strongly support dietary recommendations to increase intake of whole grain foods in the general population to reduce risk of chronic diseases and premature mortality.”

The results have been published in an open-access paper in the British Medical Journal (BMJ).

The greatest benefit was seen for people who increased from no whole grain intake to two servings per day: equivalent to 32 g/day of whole grain (such as whole grain wheat) or 60 g/day of whole grain product (such as whole grain wheat bread).

Further reductions in risk were observed up to an intake of 7.5 servings a day (equivalent to 225 g/day of whole grain products), with possible additional benefits at higher intakes.
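Assuming (as a simplification) a log-linear dose-response, the paper’s per-90 g/day relative risk for coronary heart disease (0.81) can be scaled to other intakes; the real dose-response curves flatten at high intakes, so this is only a back-of-envelope sketch:

```python
# Back-of-envelope dose-response scaling, assuming a log-linear
# relation. The paper's actual curves are nonlinear and flatten at
# high intakes, so treat these numbers as rough illustrations only.
GRAMS_PER_SERVING = 30    # 90 g/day = three servings, per the paper
RR_PER_90G_CHD = 0.81     # coronary heart disease, per 90 g/day increase

def relative_risk(servings_per_day, rr_per_90g=RR_PER_90G_CHD):
    grams = servings_per_day * GRAMS_PER_SERVING
    return rr_per_90g ** (grams / 90.0)

rr_two = relative_risk(2)     # two servings (60 g/day) vs. none: ~0.87
rr_max = relative_risk(7.5)   # 7.5 servings (225 g/day): ~0.59
```

The same scaling applied to the other reported relative risks would give similarly rough estimates.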

Relation to specific types of disorders

A large body of evidence has emerged on the health benefits of whole grain foods over the last 10–15 years. Grains are one of the major staple foods worldwide and provide on average 56% of energy intake and 50% of protein intake.

But recommendations on the daily amount and types of whole grain foods needed to reduce risk of chronic disease and mortality have often been unclear or inconsistent. So the researchers carried out a systematic review and meta-analysis of 45 published studies on whole grain consumption in relation to several health outcomes and all-cause mortality.*

Reductions in risks of cardiovascular disease and all-cause mortality were associated with intake of whole grain bread, whole grain breakfast cereals, and added bran, as well as total intake of bread and breakfast cereals.

There was little evidence of an association with intake of refined grains, white rice, total rice or other grains.

Caveats and recommendations

Few people consume three or more servings of whole grains a day, so the authors recommend “increasing intake of whole grains, and as much as possible to choose whole grains rather than refined grains.”

However, the researchers noted that systematic reviews and meta-analyses involving observational research cannot be used to draw conclusions about cause and effect.

They call for more research to determine health benefits of different types of whole grain in different geographical regions, as most of the current evidence is from the U.S. and fewer studies have been conducted in Europe, Asia and other regions. Studies of specific diseases, and less common causes of deaths, are needed.

They caution that “great care” should be taken not to promote whole grain foods with high sugar and salt content, and they call for more research on the “biological mechanisms of health effects and contribution to health of different grain types.”

A related study published in The Journals of Gerontology, Series A (recently described on KurzweilAI — see Dietary fiber has biggest influence on successful aging, research reveals) found that fiber made the biggest difference to what the researchers termed “successful aging,” meaning “the absence of disability, depressive symptoms, cognitive impairment, respiratory symptoms, and chronic diseases including cancer, coronary artery disease, and stroke.”

* They included more than 7,000 cases of coronary heart disease, 2,000 cases of stroke, 26,000 cases of cardiovascular disease, 34,000 deaths from cancer, and 100,000 deaths among 700,000 participants.

Abstract of Whole grain consumption and risk of cardiovascular disease, cancer, and all cause and cause specific mortality: systematic review and dose-response meta-analysis of prospective studies

Objective To quantify the dose-response relation between consumption of whole grain and specific types of grains and the risk of cardiovascular disease, total cancer, and all cause and cause specific mortality.

Data sources PubMed and Embase searched up to 3 April 2016.

Study selection Prospective studies reporting adjusted relative risk estimates for the association between intake of whole grains or specific types of grains and cardiovascular disease, total cancer, all cause or cause specific mortality.

Results 45 studies (64 publications) were included. The summary relative risk per 90 g/day increase in whole grain intake (90 g is equivalent to three servings—for example, two slices of bread and one bowl of cereal or one and a half pieces of pita bread made from whole grains) was 0.81 (95% confidence interval 0.75 to 0.87; I2=9%, n=7 studies) for coronary heart disease, 0.88 (0.75 to 1.03; I2=56%, n=6) for stroke, and 0.78 (0.73 to 0.85; I2=40%, n=10) for cardiovascular disease, with similar results when studies were stratified by whether the outcome was incidence or mortality. The relative risks for mortality were 0.85 (0.80 to 0.91; I2=37%, n=6) for total cancer, 0.83 (0.77 to 0.90; I2=83%, n=11) for all causes, 0.78 (0.70 to 0.87; I2=0%, n=4) for respiratory disease, 0.49 (0.23 to 1.05; I2=85%, n=4) for diabetes, 0.74 (0.56 to 0.96; I2=0%, n=3) for infectious diseases, 1.15 (0.66 to 2.02; I2=79%, n=2) for diseases of the nervous system, and 0.78 (0.75 to 0.82; I2=0%, n=5) for all non-cardiovascular, non-cancer causes. Reductions in risk were observed up to an intake of 210-225 g/day (seven to seven and a half servings per day) for most of the outcomes. Intakes of specific types of whole grains including whole grain bread, whole grain breakfast cereals, and added bran, as well as total bread and total breakfast cereals were also associated with reduced risks of cardiovascular disease and/or all cause mortality, but there was little evidence of an association with refined grains, white rice, total rice, or total grains.

Conclusions This meta-analysis provides further evidence that whole grain intake is associated with a reduced risk of coronary heart disease, cardiovascular disease, and total cancer, and mortality from all causes, respiratory diseases, infectious diseases, diabetes, and all non-cardiovascular, non-cancer causes. These findings support dietary guidelines that recommend increased intake of whole grain to reduce the risk of chronic diseases and premature mortality.

The Evolutionary Origins of Hierarchy: Evolution with performance-only selection results in non-hierarchical and non-modular networks, which take longer to adapt to new environments. However, evolving networks with a connection cost creates hierarchical and functionally modular networks that can solve the overall problem by recursively solving its sub-problems. These networks also adapt to new environments faster. (credit: Henok Mengistu et al./PLOS Comp. Bio)

New research suggests why the human brain and other biological networks exhibit a hierarchical structure, and the study may improve attempts to create artificial intelligence.

The study, by researchers from the University of Wyoming and the French Institute for Research in Computer Science and Automation (INRIA, in France), demonstrates that the evolution of hierarchy — a simple system of ranking — in biological networks may arise because of the costs associated with network connections.

This study also supports Ray Kurzweil’s theory of the hierarchical structure of the neocortex, presented in his 2012 book, How to Create a Mind.

The human brain has separate areas for vision, motor control, and tactile processing, for example, and each of these areas consists of sub-regions that govern different parts of the body.

Evolutionary pressure to reduce the number and cost of connections

The research findings suggest that hierarchy evolves not because it produces more efficient networks, but instead because hierarchically wired networks have fewer connections. That’s because connections in biological networks are expensive — they have to be built, maintained, etc. — so there’s an evolutionary pressure to reduce the number of connections.
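That selection pressure can be summarized in one line: fitness equals task performance minus a penalty per connection. The toy comparison below is illustrative; the cost coefficient and the example networks are made up, not taken from the study:

```python
# Toy illustration of performance-only vs. connection-cost selection.
# Networks are reduced to (performance, number of connections);
# the cost coefficient is an arbitrary illustrative value.
COST_PER_CONNECTION = 0.01

def fitness(performance, n_connections, connection_cost=True):
    penalty = COST_PER_CONNECTION * n_connections if connection_cost else 0.0
    return performance - penalty

# Two networks that solve the task equally well: a densely wired one
# and a sparser, hierarchically wired one.
dense = (1.0, 120)    # (performance, number of connections)
sparse = (1.0, 40)

# Performance-only selection cannot tell them apart...
assert fitness(*dense, connection_cost=False) == fitness(*sparse, connection_cost=False)
# ...but with a connection cost, the sparser (hierarchical) wiring wins.
assert fitness(*sparse) > fitness(*dense)
```

Under repeated selection with the penalty switched on, sparser wiring keeps winning, which is the pressure the study identifies as driving hierarchy and modularity.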

In addition to shedding light on the emergence of hierarchy across the many domains in which it appears, these findings may also accelerate future research into evolving more complex, intelligent computational brains in the fields of artificial intelligence and robotics.

The research, led by Henok S. Mengistu, is described in an open-access paper in PLOS Computational Biology. The researchers also simulated the evolution of computational brain models, known as artificial neural networks, both with and without a cost for network connections. They found that hierarchical structures emerge much more frequently when a cost for connections is present.

Aside from explaining why biological networks are hierarchical, the research might also explain why many man-made systems such as the Internet and road systems are also hierarchical. “The next step is to harness and combine this knowledge to evolve large-scale, structurally organized networks in the hopes of creating better artificial intelligence and increasing our understanding of the evolution of animal intelligence, including our own,” according to the researchers.

Abstract of The Evolutionary Origins of Hierarchy

Hierarchical organization—the recursive composition of sub-modules—is ubiquitous in biological networks, including neural, metabolic, ecological, and genetic regulatory networks, and in human-made systems, such as large organizations and the Internet. To date, most research on hierarchy in networks has been limited to quantifying this property. However, an open, important question in evolutionary biology is why hierarchical organization evolves in the first place. It has recently been shown that modularity evolves because of the presence of a cost for network connections. Here we investigate whether such connection costs also tend to cause a hierarchical organization of such modules. In computational simulations, we find that networks without a connection cost do not evolve to be hierarchical, even when the task has a hierarchical structure. However, with a connection cost, networks evolve to be both modular and hierarchical, and these networks exhibit higher overall performance and evolvability (i.e. faster adaptation to new environments). Additional analyses confirm that hierarchy independently improves adaptability after controlling for modularity. Overall, our results suggest that the same force–the cost of connections–promotes the evolution of both hierarchy and modularity, and that these properties are important drivers of network performance and adaptability. In addition to shedding light on the emergence of hierarchy across the many domains in which it appears, these findings will also accelerate future research into evolving more complex, intelligent computational brains in the fields of artificial intelligence and robotics.

Neurons need large amounts of energy to extend their axons long distances through the body. This energy — in the form of adenosine triphosphate (ATP) — is provided by mitochondria.

During development, mitochondria are transported up and down growing axons to generate ATP wherever it is needed. In adults, however, mitochondria become less mobile as mature neurons produce a protein called syntaphilin that anchors the mitochondria in place.

Zu-Hang Sheng and colleagues at the National Institute of Neurological Disorders and Stroke wondered whether this decrease in mitochondrial transport might explain why adult neurons are typically unable to regrow after injury.

They initially found that when mature mouse axons are severed, nearby mitochondria are damaged and become unable to provide sufficient ATP to support injured nerve regeneration. However, when the researchers experimentally removed syntaphilin from the nerve cells (by using a genetically modified mouse), mitochondrial transport was enhanced, allowing the damaged mitochondria to be replaced by healthy mitochondria capable of producing ATP.

The syntaphilin-deficient mature neurons therefore regained the ability to regrow after injury, just like young neurons.

“Our in vivo and in vitro studies suggest that activating an intrinsic growth program requires the coordinated modulation of mitochondrial transport and recovery of energy deficits. Such combined approaches may represent a valid therapeutic strategy to facilitate regeneration in the central and peripheral nervous systems after injury or disease,” Sheng says.

Small electrical currents appear to activate certain immune cells to jumpstart or speed wound healing and reduce infection when there’s a lack of immune cells available, such as with diabetes, University of Aberdeen (U.K.) scientists have found.

In a lab experiment, the scientists exposed healing macrophages (white blood cells that eat things that don’t belong), taken from human blood, to electrical fields similar in strength to those generated in injured skin. When the voltage was applied, the macrophages moved in a directed manner toward Candida albicans fungus cells, engulfing and digesting them. (This process is called “phagocytosis,” in which macrophages clean the wound site, limit infection, and allow the repair process to proceed.)

The electric fields enhanced the uptake and clearance of a variety of targets known to promote inflammation and impair healing in addition to Candida albicans, including latex beads and expended white blood cells.*

“These findings raise the prospect that EF-based therapies could be extended beyond tissue repair and ultimately, be exploited to modulate the function of macrophages in other inflammatory diseases where these cells are dysregulated,” the researchers note in a report appearing in the June 2016 issue of the Journal of Leukocyte Biology.

“This new work identifies previously unappreciated opportunities to tune immune system function with electrical fields and has potentially wide-reaching implications for wound repair for a variety of diseases where macrophages play a role, including infectious disease, cancer and even obesity,” said John Wherry, Ph.D., Deputy Editor of the Journal of Leukocyte Biology.

* The experiments also showed that electric fields selectively augmented the production of protein modulators associated with the healing process, enhancing cytokine (growth factor) production and phagocytic activity essential for clearance of infection and for tissue repair and confirming that macrophages are tuned to respond to naturally generated electrical signals in a manner that boosts their healing ability.

Macrophages are key cells in inflammation and repair, and their activity requires close regulation. The characterization of cues coordinating macrophage function has focused on biologic and soluble mediators, with little known about their responses to physical stimuli, such as the electrical fields that are generated naturally in injured tissue and which accelerate wound healing. To address this gap in understanding, we tested how properties of human monocyte-derived macrophages are regulated by applied electrical fields, similar in strengths to those established naturally. With the use of live-cell video microscopy, we show that macrophage migration is directed anodally by electrical fields as low as 5 mV/mm and is electrical field strength dependent, with effects peaking ∼300 mV/mm. Monocytes, as macrophage precursors, migrate in the opposite, cathodal direction. Strikingly, we show for the first time that electrical fields significantly enhance macrophage phagocytic uptake of a variety of targets, including carboxylate beads, apoptotic neutrophils, and the nominal opportunist pathogen Candida albicans, which engage different classes of surface receptors. These electrical field-induced functional changes are accompanied by clustering of phagocytic receptors, enhanced PI3K and ERK activation, mobilization of intracellular calcium, and actin polarization. Electrical fields also modulate cytokine production selectively and can augment some effects of conventional polarizing stimuli on cytokine secretion. Taken together, electrical signals have been identified as major contributors to the coordination and regulation of important human macrophage functions, including those essential for microbial clearance and healing. Our results open up a new area of research into effects of naturally occurring and clinically applied electrical fields in conditions where macrophage activity is critical.

Researchers at VTT Technical Research Centre of Finland have developed a hybrid nano-electrode that’s only a few nanometers thick. It consists of porous silicon coated with a titanium nitride layer formed by atomic layer deposition.

The nano-electrode design features the highest-ever conductive surface-to-volume ratio. That, combined with an ionic liquid in a microchannel formed between the two electrodes, results in an extremely small form factor and efficient energy storage. The design makes it possible for a silicon-based micro-supercapacitor to achieve higher energy storage (energy density) and faster charge/discharge (power density) than the leading carbon- and graphene-based supercapacitors, according to the researchers.

The micro-supercapacitor can store 0.2 joule (55 microwatts of power for one hour) on a one-square-centimeter silicon chip. This design also leaves the surface of the chip available for active integrated microcircuits and sensors.
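Those two figures are mutually consistent: energy is just power multiplied by time. A quick sanity check (my arithmetic, not from the paper):

```python
# Check the article's figure: 55 microwatts sustained for one hour
# should equal roughly 0.2 joule of stored energy (E = P * t).
power_w = 55e-6      # 55 microwatts
time_s = 60 * 60     # one hour in seconds
energy_j = power_w * time_s
print(energy_j)      # ~0.198 J, which rounds to the quoted 0.2 J
```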

An open-access paper on the research has been published in Nano Energy journal.

Abstract of Conformal titanium nitride in a porous silicon matrix: A nanomaterial for in-chip supercapacitors

Today’s supercapacitor energy storages are typically discrete devices aimed for printed boards and power applications. The development of autonomous sensor networks and wearable electronics and the miniaturization of mobile devices would benefit substantially from solutions in which the energy storage is integrated with the active device. Nanostructures based on porous silicon (PS) provide a route towards integration due to the very high inherent surface area to volume ratio and compatibility with microelectronics fabrication processes. Unfortunately, pristine PS has limited wettability and poor chemical stability in electrolytes and the high resistance of the PS matrix severely limits the power efficiency. In this work, we demonstrate that excellent wettability and electro-chemical properties in aqueous and organic electrolytes can be obtained by coating the PS matrix with an ultra-thin layer of titanium nitride by atomic layer deposition. Our approach leads to very high specific capacitance (15 F cm−3), energy density (1.3 mWh cm−3), power density (up to 214 W cm−3) and excellent stability (more than 13,000 cycles). Furthermore, we show that the PS–TiN nanomaterial can be integrated inside a silicon chip monolithically by combining MEMS and nanofabrication techniques. This leads to realization of in-chip supercapacitor, i.e., it opens a new way to exploit the otherwise inactive volume of a silicon chip to store energy.

University of Maryland researchers have developed a method to quickly and inexpensively assemble diamond-based hybrid nanoparticles from the ground up in large quantities while avoiding many of the problems with current methods.

These hybrid nanoparticles could speed the design of room-temperature qubits for quantum computers and create brighter dyes for biomedical imaging or highly sensitive magnetic and temperature sensors, for example.

The basic trick in creating an interesting or useful diamond is, ironically, to add a defect to the diamond’s crystal lattice. It’s similar to doping silicon to give it special electronic properties (such as making it work as a transistor).

Pure diamonds consist of an orderly lattice of carbon atoms and are completely transparent. However, pure diamonds are quite rare in natural diamond deposits; most have defects resulting from non-carbon impurities such as nitrogen, boron and phosphorus. Such defects create the subtle and desirable color variations seen in gemstone diamonds.

These altered bonds are also the source of the optical, electromagnetic, and quantum physical properties that make a nanodiamond useful when paired with other nanomaterials.

The most useful impurity, and the one used in the Maryland study, is the famous “nitrogen vacancy” defect: a single nitrogen atom sitting where a carbon atom should be, with an empty lattice site right next to it.

As KurzweilAI has shown in several articles, a nitrogen vacancy in a diamond (or other crystalline materials) can lead to a variety of interesting new properties, such as a highly sensitive way to detect neural signals, an ultrasensitive real-time magnetic field detector, and importantly, making a nanodiamond behave as a quantum bit (qubit) for use in quantum computing and other applications.

Nearly all qubits studied to date require ultra-cold temperatures to function properly. A qubit that works at room temperature would represent a significant step forward, helping bring quantum circuits into industrial, commercial, and consumer-level electronics. That’s of special interest to Ouyang’s team.

Ouyang and his colleagues’ main breakthrough, though, is their method for constructing the hybrid nanoparticles. Other researchers have paired nanodiamonds with complementary nanoparticles using relatively imprecise methods, such as manually placing the diamonds and particles next to each other on a larger surface, one by one.

His team’s method also enables precise control of the hybrid particles’ properties, such as the composition and total number of non-diamond particles.

“A major strength of our technique is that it is broadly useful and can be applied to a variety of diamond types and paired with a variety of other nanomaterials,” Ouyang said. “It can also be scaled up fairly easily. We are interested in studying the basic physics further, but also moving toward specific applications.”

The ability to control the interaction between nitrogen-vacancy centres in diamond and photonic and/or broadband plasmonic nanostructures is crucial for the development of solid-state quantum devices with optimum performance. However, existing methods typically employ top-down fabrication, which restrict scalable and feasible manipulation of nitrogen-vacancy centres. Here, we develop a general bottom-up approach to fabricate an emerging class of freestanding nanodiamond-based hybrid nanostructures with external functional units of either plasmonic nanoparticles or excitonic quantum dots. Precise control of the structural parameters (including size, composition, coverage and spacing of the external functional units) is achieved, representing a pre-requisite for exploring the underlying physics. Fine tuning of the emission characteristics through structural regulation is demonstrated by performing single-particle optical studies. This study opens a rich toolbox to tailor properties of quantum emitters, which can facilitate design guidelines for devices based on nitrogen-vacancy centres that use these freestanding hybrid nanostructures as building blocks.

In this artist’s conception, a carbon planet orbits a sunlike star in the early universe. Young planetary systems lacking heavy chemical elements but relatively rich in carbon could form worlds made of graphite, carbides, and diamond rather than Earth-like silicate rocks. Blue patches show where water has pooled on the planet’s surface, forming potential habitats for alien life. (credit: Christine Pulliam (CfA), Sun image: NASA/SDO)

New findings by scientists at the Harvard-Smithsonian Center for Astrophysics (CfA) suggest that planet formation in the early universe might have created carbon planets consisting of graphite, carbides, and diamond and that astronomers might find these diamond worlds by searching a rare class of stars.

“This work shows that even stars with a tiny fraction of the carbon in our solar system can host planets,” says lead author and Harvard University graduate student Natalie Mashian. “We have good reason to believe that alien life will be carbon-based, like life on Earth, so this also bodes well for the possibility of life in the early universe.”

The primordial universe consisted mostly of hydrogen and helium and lacked chemical elements like carbon and oxygen that are necessary for life as we know it. Only after the first stars exploded as supernovae and seeded the next generation of stars with heavier elements did planet formation, and life, become possible.

Clues to how life got started in the universe

Mashian and her PhD thesis advisor Avi Loeb examined a particular class of old stars known as carbon-enhanced metal-poor (CEMP) stars. These “anemic” stars contain only one hundred-thousandth as much iron as our Sun, meaning they formed before interstellar space had been widely seeded with heavy elements.

“These stars are fossils from the young universe,” explains Loeb. “By studying them, we can look at how planets, and possibly life in the universe, got started.”

CEMP stars have more carbon than would be expected, given their age. This relative abundance would influence planet formation as fluffy carbon dust grains (from supernovae) clump together to form tar-black worlds.

From a distance, these carbon planets would be difficult to tell apart from more Earth-like worlds. Their masses and physical sizes would be similar. Astronomers would have to examine their atmospheres for signs of their true nature. Gases like carbon monoxide and methane would envelop these unusual worlds.

The transit technique for detecting carbon planets

When a planet crosses in front of its star as viewed by an observer, the event is called a transit. The relative change in flux caused by a carbon-based planet transiting across its host CEMP star would range from ~0.0001% to ~0.01%. (credit: NASA Ames)

But a dedicated search for planets around CEMP stars can be done using the transit technique, the scientists suggest. We encountered the transit technique on KurzweilAI in “How to use laser cloaking to hide Earth from remote detection by aliens,” in which we noted that two Columbia University astronomers suggested we could cloak Earth from aliens by shining a laser during transits so they couldn’t see the tiny giveaway changes in brightness, or could modify the light from our Sun during a transit to make it obviously artificial, sending a message to ET: “we’re here.”

In the new CfA paper, published in the Monthly Notices of the Royal Astronomical Society*, the scientists note that the relative change in flux caused by a carbon-based planet transiting across its host CEMP star ranges from ~0.0001% to ~0.01% — too weak to be detected by ground telescopes. That means it would require “space-based transit surveys that continuously monitor a large number of potential host stars over several years and measure their respective transit light curves.”
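The quoted flux dips follow from the standard transit-depth relation: the fractional dip is roughly (Rplanet/Rstar)². A hedged illustration (the formula and the approximate radii are standard values, not taken from the paper): an Earth-sized planet crossing a Sun-like star produces a dip of about 0.008%, comfortably inside the quoted range.

```python
# Transit depth: fractional dip in starlight ~ (R_planet / R_star)^2.
# Radii are standard approximate values, not from the CfA paper.
R_EARTH_KM = 6371.0    # mean Earth radius
R_SUN_KM = 696_000.0   # solar radius

def transit_depth_percent(r_planet_km: float, r_star_km: float) -> float:
    """Fractional flux dip during a transit, expressed as a percentage."""
    return (r_planet_km / r_star_km) ** 2 * 100.0

depth = transit_depth_percent(R_EARTH_KM, R_SUN_KM)
print(f"{depth:.4f}%")  # ~0.0084%, within the quoted ~0.0001%-0.01% range
```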

Fortunately, “there are a number of ongoing, planned, and proposed space missions committed to this cause,” the CfA scientists note. No word if the space-based transit surveys will also watch for artificial messages.

Headquartered in Cambridge, Mass., the Harvard-Smithsonian Center for Astrophysics (CfA) is a joint collaboration between the Smithsonian Astrophysical Observatory and the Harvard College Observatory. CfA scientists, organized into six research divisions, study the origin, evolution and ultimate fate of the universe.

* The CfA study is also available (open-access) on arXiv.

Abstract of CEMP stars: possible hosts to carbon planets in the early universe

We explore the possibility of planet formation in the carbon-rich protoplanetary disks of carbon-enhanced metal-poor (CEMP) stars, possible relics of the early Universe. The chemically anomalous abundance patterns ([C/Fe] ≥ 0.7) in this subset of low-mass stars suggest pollution by primordial core-collapsing supernovae (SNe) ejecta that are particularly rich in carbon dust grains. By comparing the dust-settling timescale in the protoplanetary disks of CEMP stars to the expected disk lifetime (assuming dissipation via photoevaporation), we determine the maximum distance rmax from the host CEMP star at which carbon-rich planetesimal formation is possible, as a function of the host star’s [C/H] abundance. We then use our linear relation between rmax and [C/H], along with the theoretical mass-radius relation derived for a solid, pure carbon planet, to characterize potential planetary transits across host CEMP stars. Given that the related transits are detectable with current and upcoming space-based transit surveys, we suggest initiating an observational program to search for carbon planets around CEMP stars in hopes of shedding light on the question of how early planetary systems may have formed after the Big Bang.

Triclosan, a common antibacterial ingredient found in many products such as toothpastes, soaps, and detergents to reduce or prevent bacterial infections, has been linked to making bacteria resistant to antibiotics, with adverse health effects. The European Union has restricted the use of triclosan in cosmetics, and the U.S. FDA is conducting an ongoing review of this ingredient.

Destroying the cell membrane to prevent resistance to antibiotics

To find a more suitable alternative, IBN Group Leader Yugen Zhang, PhD, and his team synthesized a chemical compound made up of molecules linked together in a chain (“imidazolium oligomers”), which they found can kill 99.7% of the E. coli bacteria within 30 seconds. The chain-like structure helps to penetrate the cell membrane and destroy the bacteria.

In contrast, current antibiotics only kill the bacteria but fail to destroy the cell membrane, allowing new antibiotic-resistant bacteria to grow.

“Our unique material can kill bacteria rapidly and inhibit the development of antibiotic-resistant bacteria. Computational chemistry studies supported our experimental findings that the chain-like compound works by attacking the cell membrane. This material is also safe for use because it carries a positive charge that targets the more negatively charged bacteria, without destroying red blood cells,” said Zhang.

The imidazolium oligomers come in the form of a white powder that is soluble in water. The researchers also found that once this was dissolved in alcohol, it formed gels spontaneously. So this material could be incorporated in alcohol-based sprays used for sterilization in hospitals or homes.

E. coli is a type of bacteria found in the intestines of humans and animals, and some strains can cause severe diarrhea, abdominal pain, and fever. Such infection is contagious and can spread through contaminated food or water, or from contact with people or animals.

Besides E. coli, IBN’s material was tested against other common strains of antibiotic-resistant bacteria and fungi such as Staphylococcus aureus, Pseudomonas aeruginosa, and Candida albicans (which can cause conditions ranging from skin infections to pneumonia and toxic shock syndrome) and was able to kill 99.9% of these microbes within two minutes.
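These kill percentages map onto the log-reduction scale microbiologists commonly use (a standard conversion, not a figure from the IBN paper): 99.7% is roughly a 2.5-log reduction, and 99.9% is a 3-log reduction.

```python
import math

def log_reduction(kill_fraction: float) -> float:
    """log10 reduction in viable cells for a given kill fraction."""
    return -math.log10(1.0 - kill_fraction)

print(round(log_reduction(0.997), 2))  # 2.52  (99.7% kill in 30 s)
print(round(log_reduction(0.999), 2))  # 3.0   (99.9% kill in 2 min)
```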

Infectious diseases and the increasing threat of worldwide pandemics have underscored the importance of antibiotics and hygiene. Intensive efforts have been devoted to developing new antibiotics to meet the rapidly growing demand. In particular, advancing the knowledge of the structure–property–activity relationship is critical to expedite the design and development of novel antimicrobial with the needed potential and efficacy. Herein, a series of new antimicrobial imidazolium oligomers are developed with the rational manipulation of terminal group’s hydrophobicity. These materials exhibit superior activity, excellent selectivity, ultrafast killing (>99.7% killing within 30 s), and desirable self-gelling properties. Molecular dynamic simulations reveal the delicate effect of structural changes on the translocation motion across the microbial cell membrane. The energy barrier of the translocation process analyzed by free energy calculations provides clear kinetic information to suggest that the spontaneous penetration requires a very short timescale of seconds to minutes for the new imidazolium oligomers.

Leading genomics experts have announced Genome Project-write (HGP-write), which aims to synthesize entire genomes of humans and other species from chemical components and get them to function in living cells.

As explained in Science, the goal of HGP-write is to reduce the costs of engineering large genomes, including a human genome, and to develop an ethical framework for genome-scale engineering and transformative medical applications.

HGP-write will build on the knowledge gained by The Human Genome Project (HGP-read), especially in genomic-based discovery, diagnostics, and therapeutics. But while the Human Genome Project “read” DNA to understand its code, HGP-write will use the cellular machinery provided by nature to “write” code, constructing vast DNA chains.

The goal is to launch HGP-write in 2016 with $100 million in committed support from public, private, philanthropic, industry, and academic sources globally. Autodesk has already contributed a leadership gift of $250,000 to seed the planning and launch of HGP-write.

According to the authors of the Science commentary, although “…sequencing, analyzing and editing DNA continues to advance at breakneck pace, the capability to construct DNA sequences in cells is mostly limited to a small number of short segments, restricting the ability to manipulate and understand biological systems.”

Exponential improvements in genome engineering

The new effort is expected to lead to a massive amount of information connecting the sequence of nucleotide bases in DNA with their physiological properties and functions, and it promises to have a significant impact on human health and other critical areas such as energy, agriculture, chemicals, and bioremediation, according to the organizers.

HGP-write will be implemented through a new, independent nonprofit organization, the Center of Excellence for Engineering Biology, as an open, international, multi-disciplinary research project.*

“This grand challenge is more ambitious and more focused on understanding the practical applications than the original Human Genome Project,” said George Church, Ph.D., Robert Winthrop Professor of Genetics at Harvard Medical School. “Exponential improvements in genome engineering technologies and functional testing provide an opportunity to deepen the understanding of our genetic blueprint and use this knowledge to address many of the global problems facing humanity.”

Another goal is development of new genomics analysis, design, synthesis, assembly and testing technologies, with the goal of making them more affordable and widely available. “Writing DNA code is the future of science and medicine, but our technical capabilities remain limited,” said Andrew Hessel, Distinguished Researcher, Bio/Nano Research Group, Autodesk, who will head a multidisciplinary team exploring computer-aided design and manufacturing for biotechnology and nanotechnology R&D.

“HGP-write will require research and development on a grand scale, and this effort will help to push our current technical limits by several orders of magnitude,” he said.

The HGP-write project developed from a series of meetings held over the last several years, including a closed-door meeting held May 10 in Boston, which brought together a diverse group of 130 participants from many different countries, including biologists, ethicists, engineers, plus representatives from industry, law and government.

* The Genome Project-write (HGP-write) will be an open, international research project led by a multi-disciplinary group of scientific leaders, who plan to reduce the costs of engineering and testing large genomes, including a human genome, in cell lines by more than 1,000-fold within ten years.
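That target implies a steep but steady rate of improvement. A rough back-of-the-envelope check (my arithmetic, not a claim from the project): a 1,000-fold reduction over ten years works out to costs falling by a constant factor of about 2 per year.

```python
# A 1,000-fold cost reduction over ten years, if spread evenly,
# means dividing the cost by 1000**(1/10) each year.
yearly_factor = 1000 ** (1 / 10)
print(round(yearly_factor, 3))  # ~1.995, i.e. roughly a halving of cost per year
```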

To ensure public engagement and transparency, HGP-write will include mechanisms to encourage public discourse around the emerging HGP-write capabilities. The Woodrow Wilson Center for International Scholars will help to lead such efforts for HGP-write.

George Church, Ph.D., Robert Winthrop Professor of Genetics at Harvard Medical School, Core Faculty Member at the Wyss Institute for Biologically Inspired Engineering at Harvard University, Professor of Health Sciences and Technology at Harvard and the Massachusetts Institute of Technology (MIT), and Senior Associate Faculty member at the Broad Institute. Among the leaders of the original HGP-read, Dr. Church currently heads an effort to create a version of the bacteria E.coli with a redesigned genome.

Andrew Hessel, Distinguished Researcher, Bio/Nano Research Group, Autodesk. He spearheads a multidisciplinary team exploring computer-aided design and manufacturing for biotechnology and nanotechnology R&D.

Nancy J Kelley, J.D., M.P.P., President & CEO, Nancy J Kelley & Associates, formerly Founding Executive Director, New York Genome Center. She is lead executive of HGP-write and the related Center of Excellence for Engineering Biology.

Abstract of The Genome Project–Write

The Human Genome Project (“HGP-read”) nominally completed in 2004 aimed to sequence the human genome and improve technology, cost, and quality of DNA sequencing (1, 2). It was biology’s first genome-scale project, and at the time was considered controversial by some. Now it is recognized as one of the great feats of exploration, one that has revolutionized science and medicine.

Although sequencing, analyzing, and editing DNA continue to advance at breakneck pace, the capability to construct DNA sequences in cells is mostly limited to a small number of short segments, restricting the ability to manipulate and understand biological systems. Further understanding of genetic blueprints could come from construction of large, gigabase (Gb)–sized animal and plant genomes, including the human genome, which would in turn drive development of tools and methods to facilitate large-scale synthesis and editing of genomes. To this end, we propose the Human Genome Project–Write (HGP-write).

Injecting specially prepared human adult stem cells directly into the brains of chronic stroke patients proved safe and effective in restoring motor (muscle) function in a small clinical trial led by Stanford University School of Medicine investigators.

The 18 patients had suffered their first and only stroke between six months and three years before receiving the injections, which involved drilling a small hole through their skulls.

For most patients, at least a full year had passed since their stroke — well past the time when further recovery might be hoped for. In each case, the stroke had taken place beneath the brain’s outermost layer, or cortex, and had severely affected motor function. “Some patients couldn’t walk,” said Gary Steinberg, MD, PhD, the Stanford neurosurgeon who led the trial. “Others couldn’t move their arm.”

Sonia Olea Coontz had a stroke in 2011 that affected the movement of her right arm and leg. After modified stem cells were injected into her brain as part of a clinical trial, she says her limbs “woke up.” (credit: Mark Rightmire/Stanford University School of Medicine)

One of those patients, Sonia Olea Coontz, of Long Beach, California, now 36, had a stroke in May 2011. “My right arm wasn’t working at all,” said Coontz. “It felt like it was almost dead. My right leg worked, but not well.” She walked with a noticeable limp. “I used a wheelchair a lot. After my surgery, they woke up,” she said of her limbs.

‘Clinically meaningful’ results

The promising results set the stage for an expanded trial of the procedure now getting underway. They also call for new thinking regarding the permanence of brain damage, said Gary Steinberg, MD, PhD, professor and chair of neurosurgery.

“This was just a single trial, and a small one,” cautioned Steinberg, who led the 18-patient trial and conducted 12 of the procedures himself. (The rest were performed at the University of Pittsburgh.) “It was designed primarily to test the procedure’s safety. But patients improved by several standard measures, and their improvement was not only statistically significant, but clinically meaningful. Their ability to move around has recovered visibly. That’s unprecedented. At six months out from a stroke, you don’t expect to see any further recovery.”

The trial’s results are detailed in a paper published online June 2 in Stroke. Steinberg, who has more than 15 years’ worth of experience in work with stem cell therapies for neurological indications, is the paper’s lead and senior author.

The procedure involved injecting SB623 mesenchymal stem cells, derived from the bone marrow of two donors and then modified to beneficially alter the cells’ ability to restore neurologic function.*

Motor-function improvements

Substantial improvements were seen in patients’ scores on several widely accepted metrics of stroke recovery. Perhaps most notably, there was an overall 11.4-point improvement on the motor-function component of the Fugl-Meyer test, which specifically gauges patients’ movement deficits. “Patients who were in wheelchairs are walking now,” said Steinberg, who is the Bernard and Ronni Lacroute-William Randolph Hearst Professor in Neurosurgery and Neurosciences.

“We know these cells don’t survive for more than a month or so in the brain,” he added. “Yet we see that patients’ recovery is sustained for greater than one year and, in some cases now, more than two years.”

Importantly, the stroke patients’ postoperative improvement was independent of their age or their condition’s severity at the onset of the trial. “Older people tend not to respond to treatment as well, but here we see 70-year-olds recovering substantially,” Steinberg said. “This could revolutionize our concept of what happens after not only stroke, but traumatic brain injury and even neurodegenerative disorders. The notion was that once the brain is injured, it doesn’t recover — you’re stuck with it. But if we can figure out how to jump-start these damaged brain circuits, we can change the whole effect.

“We thought those brain circuits were dead. And we’ve learned that they’re not.”

New trial now recruiting 156 patients

A new randomized, double-blinded, multicenter phase-2b trial aiming to enroll 156 chronic stroke patients is now actively recruiting. Steinberg is the principal investigator of that trial. For more information, you can e-mail stemcellstudy@stanford.edu. “There are close to 7 million chronic stroke patients in the United States,” Steinberg said. “If this treatment really works for that huge population, it has great potential.”

Some 800,000 people suffer a stroke each year in the United States alone. About 85 percent of all strokes are ischemic: They occur when a clot forms in a blood vessel supplying blood to part of the brain, with subsequent intensive damage to the affected area. The specific loss of function incurred depends on exactly where within the brain the stroke occurs, and on its magnitude.

Although approved therapies for ischemic stroke exist, to be effective they must be applied within a few hours of the event — a time frame that often is exceeded by the amount of time it takes for a stroke patient to arrive at a treatment center.

Consequently, only a small fraction of patients benefit from treatment during the stroke’s acute phase. The great majority of survivors end up with enduring disabilities. Some lost functionality often returns, but it’s typically limited. And the prevailing consensus among neurologists is that virtually all recovery that’s going to occur comes within the first six months after the stroke.

* Mesenchymal stem cells are the naturally occurring precursors of muscle, fat, bone and tendon tissues. In preclinical studies, though, they’ve not been found to cause problems by differentiating into unwanted tissues or forming tumors. Easily harvested from bone marrow, they appear to trigger no strong immune reaction in recipients even when they come from an unrelated donor. In fact, they may actively suppress the immune system. For this trial, unlike the great majority of transplantation procedures, the stem cell recipients received no immunosuppressant drugs.

During the procedure, patients’ heads were held in fixed positions while a hole was drilled through their skulls to allow for the injection of SB623 cells, accomplished with a syringe, into a number of spots at the periphery of the stroke-damaged area, which varied from patient to patient.

Afterward, patients were monitored via blood tests, clinical evaluations and brain imaging. Interestingly, the implanted stem cells themselves do not appear to survive very long in the brain. Preclinical studies have shown that these cells begin to disappear about one month after the procedure and are gone by two months. Yet, patients showed significant recovery by a number of measures within a month’s time, and they continued improving for several months afterward, sustaining these improvements at six and 12 months after surgery. Steinberg said it’s likely that factors secreted by the mesenchymal cells during their early postoperative presence near the stroke site stimulate lasting regeneration or reactivation of nearby nervous tissue.

No relevant blood abnormalities were observed. Some patients experienced transient nausea and vomiting, and 78 percent had temporary headaches related to the transplant procedure.

Using data compiled from the Blue Mountains Eye Study, a benchmark population-based study that examined a cohort of more than 1,600 adults aged 50 years and older for long-term sensory loss risk factors and systemic diseases, the researchers examined a range of factors: total carbohydrate intake, total fiber intake, glycemic index, glycemic load, and sugar intake. Surprisingly, it was fiber that made the biggest difference to what the researchers termed “successful aging.”

Successful aging was defined as including an absence of disability, depressive symptoms, cognitive impairment, respiratory symptoms, and chronic diseases including cancer, coronary artery disease, and stroke.

Fiber, or roughage, is the indigestible part of plant foods that pushes through the digestive system, absorbing water along the way and easing bowel movements.

According to the paper’s lead author, Associate Professor Bamini Gopinath, PhD, from the Institute’s Centre for Vision Research, “Out of all the variables that we looked at, fiber intake — which is a type of carbohydrate that the body can’t digest — had the strongest influence. Essentially, we found that those who had the highest intake of fiber or total fiber actually had an almost 80 percent greater likelihood of living a long and healthy life over a 10-year follow-up. That is, they were less likely to suffer from hypertension, diabetes, dementia, depression, and functional disability.”

While it might have been expected that the level of sugar intake would make the biggest impact on successful aging, Gopinath pointed out that the particular group they examined were older adults whose intake of carbonated and sugary drinks was quite low.

Although it is too early to use the study results as a basis for dietary advice, Gopinath said the research has opened up a new avenue for exploration. “There are a lot of other large cohort studies that could pursue this further and see if they can find similar associations. And it would also be interesting to tease out the mechanisms that are actually linking these variables,” she said.

This study backs up similar recent findings by the researchers, which highlight the importance of the overall diet to healthy aging.

In another study published last year in The Journals of Gerontology, Westmead Institute researchers found that, in general, adults who closely adhered to recommended national dietary guidelines reached old age with an absence of chronic diseases and disability, and had good functional and mental health status.

Abstract of Association Between Carbohydrate Nutrition and Successful Aging Over 10 Years

Methods: A total of 1,609 adults aged 49 years and older who were free of cancer, coronary artery disease, and stroke at baseline were followed for 10 years. Dietary data were collected using a semiquantitative Food Frequency Questionnaire. Successful aging status was determined through interviewer-administered questionnaire at each visit and was defined as the absence of disability, depressive symptoms, cognitive impairment, respiratory symptoms, and chronic diseases (eg, cancer and coronary artery disease).

Results: In all, 249 (15.5%) participants had aged successfully 10 years later. Dietary GI, GL, and carbohydrate intake were not significantly associated with successful aging. However, participants in the highest versus lowest (reference group) quartile of total fiber intake had greater odds of aging successfully than suboptimal aging, multivariable-adjusted odds ratio (OR), 1.79 (95% confidence interval [CI] 1.13–2.84). Those who remained consistently below the median in consumption of fiber from breads/cereal and fruit compared with the rest of the cohort were less likely to age successfully, OR 0.53 (95% CI 0.34–0.84) and OR 0.64 (95% CI 0.44–0.95), respectively.

Conclusions: Consumption of dietary fiber from breads/cereals and fruits independently influenced the likelihood of aging successfully over 10 years. These findings suggest that increasing intake of fiber-rich foods could be a successful strategy in reaching old age disease free and fully functional.
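
Odds ratios like the one reported above come from a simple 2×2 table. As a rough illustration only (the counts below are invented for the example, not the study’s raw data), here is how an OR and its 95% confidence interval are computed:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI for a 2x2 table:
         a = exposed with outcome,   b = exposed without outcome,
         c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical counts: successful agers vs. not, in the top vs. bottom
# fiber-intake quartile (illustrative numbers only).
or_, (lo, hi) = odds_ratio_ci(80, 320, 50, 350)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

An OR near 1.8 with a CI excluding 1, as in the study, indicates roughly 80 percent greater odds of the outcome in the high-fiber group.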

VAMPs are functionally modeled after the human bicep, similar to the biological muscle in terms of response time and efficiency. (credit: Wyss Institute at Harvard University)

If robots are going to work around humans, they will have to be softer and safer. A Harvard team has designed a new actuator with that in mind. Its movements are similar to those of a human bicep muscle, using vacuum power to actuate soft rubber beams. Like real muscles, the actuators are soft, shock-absorbing, and pose no danger, according to the researchers.

Whitesides’ team took an unconventional approach to its design, relying on vacuum to decrease the actuator’s volume and cause it to buckle. While conventional engineering would consider buckling to be a mechanical instability and a point of failure, in this case the team leveraged this instability to develop VAMPs (vacuum-actuated muscle-inspired pneumatic structures). Previous soft actuators rely on pressurized systems that expand in volume, but VAMPs mimic true muscle because they contract, which makes them useful in confined spaces and for a variety of purposes.

In this image, VAMPs are shown actuated and cut open in cross section. The honeycomb cross section shows the inner chambers that collapse when vacuum is applied. (credit: Wyss Institute at Harvard University)

The actuator has soft elastomeric rubber beams filled with small, hollow chambers of air like a honeycomb. By applying vacuum, the chambers collapse and the entire actuator contracts, generating movement. The internal honeycomb structure can be custom tailored to enable linear, twisting, bending, or combinatorial motions.
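
This collapse behavior can be caricatured with a toy model: no contraction until the applied vacuum reaches a buckling threshold, then strain that saturates at the geometric limit set by full chamber collapse. All parameter values below are illustrative placeholders, not measured VAMP properties:

```python
import math

def vamp_strain(vacuum_kpa, p_buckle=20.0, strain_max=0.40, k=0.05):
    """Toy contraction model for a buckling vacuum actuator: zero motion
    below the buckling threshold, then strain rising toward the geometric
    limit set by full chamber collapse. All parameters are illustrative,
    not measured VAMP values."""
    if vacuum_kpa <= p_buckle:
        return 0.0          # beams have not buckled yet
    return strain_max * (1 - math.exp(-k * (vacuum_kpa - p_buckle)))

for p in (10, 30, 60, 90):  # applied vacuum in kPa
    print(p, round(vamp_strain(p), 3))
```

The threshold-then-saturation shape captures why buckling, normally treated as failure, gives a sharp and repeatable contraction once harnessed.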

The team envisions that robots built with VAMPs could be used to assist the disabled or elderly, to serve food, deliver goods, and perform other tasks related to the service industry. Soft robots could also make industrial production lines safer and faster, and quality control easier to manage by enabling human operators to work in the same space.

Fail-safe design

VAMPs are designed to prevent catastrophic failure: the team showed that they still function even when damaged with a 2-mm hole. In the event of major damage to the system, it fails safely. “It can’t explode, so it’s intrinsically safe,” said Whitesides. Whereas other actuators powered by electricity or combustion could cause damage to humans or their surroundings, loss of vacuum pressure in VAMPs would simply render the actuator motionless.

“These self-healing, bioinspired actuators bring us another step closer to being able to build entirely soft-bodied robots, which may help to bridge the gap between humans and robots and open entirely new application areas in medicine and beyond,” said Wyss Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Boston Children’s Hospital Vascular Biology Program, as well as Professor of Bioengineering at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS).

The work was reported June 1 in the journal Advanced Materials Technologies.

Harvard’s Office of Technology Development has filed patents on this and related inventions, and the soft actuator technology has been licensed to Soft Robotics, Inc., a startup launched in 2013 and cofounded by Whitesides. The company is developing robotic grasping systems toward initial applications including picking and packing in unstructured environments — for example, handling fruits and vegetables in produce distribution warehouses. Longer term, this technology can be leveraged to develop products for biomedical applications.

Abstract of Buckling Pneumatic Linear Actuators Inspired by Muscle

The mechanical features of biological muscles are difficult to reproduce completely in synthetic systems. A new class of soft pneumatic structures (vacuum-actuated muscle-inspired pneumatic structures) is described that combines actuation by negative pressure (vacuum) with cooperative buckling of beams fabricated in a slab of elastomer to achieve motion and demonstrate many features that are similar to those of mammalian muscle.

With technical refinements and further research, such implanted neuroprosthesis systems might help to promote walking ability for at least some patients with post-stroke disability.

Clinically relevant gait improvements

The researchers report their experience with an implanted neuroprosthesis in a 64-year-old man with impaired motion and sensation of his left leg and foot after a hemorrhagic (bleeding) stroke. After thorough evaluation, he underwent surgery to place an implanted pulse generator and intramuscular stimulating electrodes in seven muscles of the hip, knee, and ankle.*

Makowski and colleagues then created a customized electrical stimulation program to activate the muscles, with the goal of restoring a more natural gait pattern. The patient went through extensive training in the researchers’ laboratory for several months after neuroprosthesis placement.

With training alone (no muscle stimulation), gait speed increased only from 0.29 meters per second (m/s) before surgery to 0.35 m/s after training, a non-significant improvement. But when muscle stimulation was turned on, gait speed increased dramatically, to 0.72 m/s, with “more symmetrical and dynamic gait.”

In addition, the patient was able to walk much farther. When first evaluated, he could walk only 76 meters before becoming fatigued. After training but without stimulation, he could walk about 300 meters (in 16 minutes). With stimulation, his maximum walking distance increased to more than 1,400 meters (in 41 minutes).
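
The endurance figures imply sustained average speeds that can be checked with simple arithmetic (distance divided by time):

```python
def avg_speed(distance_m, minutes):
    """Average sustained walking speed in meters per second."""
    return distance_m / (minutes * 60)

print(f"training only:    {avg_speed(300, 16):.2f} m/s over 300 m")
print(f"with stimulation: {avg_speed(1400, 41):.2f} m/s over 1,400 m")
```

As expected, these sustained endurance speeds are somewhat lower than the short 10-meter gait speeds quoted above, since fatigue accumulates over long distances.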

Even though the patient wasn’t walking with stimulation outside the laboratory, his walking ability in daily life improved significantly. He went from “household-only” ambulation to increased walking outside in the neighborhood.

“The therapeutic effect is likely a result of muscle conditioning during stimulated exercise and gait training,” according to the authors. “Persistent use of the device during walking may provide ongoing training that maintains both muscle conditioning and cardiovascular health.”

While the results of this initial experience in a single patient are encouraging, the researchers emphasize that large-scale studies will be needed to demonstrate the wider applicability of a neuroprosthesis for multi-joint control. If the benefits are confirmed, Makowski and colleagues conclude, “daily use of an implanted system could have significant clinical relevance to a portion of the stroke population.”

Abstract of Improving Walking with an Implanted Neuroprosthesis for Hip, Knee, and Ankle Control After Stroke

Objective: The objective of this work was to quantify the effects of a fully implanted pulse generator to activate or augment actions of hip, knee, and ankle muscles after stroke.

Design: The subject was a 64-year-old man with left hemiparesis resulting from hemorrhagic stroke 21 months before participation. He received an 8-channel implanted pulse generator and intramuscular stimulating electrodes targeting unilateral hip, knee, and ankle muscles on the paretic side. After implantation, a stimulation pattern was customized to assist with hip, knee, and ankle movement during gait.

The subject served as his own concurrent and longitudinal control with and without stimulation. Outcome measures included 10-m walk and 6-minute timed walk to assess gait speed, maximum walk time, and distance to measure endurance, and quantitative motion analysis to evaluate spatial-temporal characteristics. Assessments were repeated under 3 conditions: (1) volitional walking at baseline, (2) volitional walking after training, and (3) walking with stimulation after training.

Results: Volitional gait speed improved with training from 0.29 m/s to 0.35 m/s and further increased to 0.72 m/s with stimulation. Most spatial-temporal characteristics improved and represented more symmetrical and dynamic gait.

Conclusions: These data suggest that a multijoint approach to implanted neuroprostheses can provide clinically relevant improvements in gait after stroke.

This wire frame prototype of a toy aircraft was printed in just 10 minutes, including testing for correct fit, and modified during printing to create the cockpit. The file was updated in the process, and could be used to print a finished model. (credit: Cornell University)

Cornell researchers have developed an interactive prototyping system that prints a wire frame of your design as you design it. You can pause anywhere in the process to test or measure and make needed changes, which will be added to the physical model still in the printer.

In conventional 3-D printing, a nozzle scans across a stage depositing drops of plastic, rising slightly after each pass to build an object in a series of layers. With the On-the-Fly-Print system, the nozzle instead extrudes a rope of quick-hardening plastic to create a wire frame that represents the surface of the solid object described in a computer-aided design (CAD) file and allows the designer to make refinements while printing is in progress.

The printer’s stage can be rotated so that any face of the model faces up; an airplane fuselage, for example, can be turned on its side to add a wing. There is also a cutter to remove parts of the model, say, to give the airplane a cockpit, and the nozzle can reach through the wire mesh to make changes inside. The rotating stage adds yaw and pitch to the printer’s usual three axes, for five degrees of freedom.
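
A wireframe print path is essentially the edge set of the CAD model’s surface mesh. The sketch below (a hypothetical helper, not Cornell’s actual software) extracts the unique edges a wireframe printer would extrude from a triangle mesh:

```python
def wireframe_edges(triangles):
    """Collect the unique edges of a triangle mesh -- the path set a
    wireframe printer would extrude. Each triangle is a tuple of three
    vertex indices; each shared edge is returned once, as a sorted pair."""
    edges = set()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            edges.add((min(u, v), max(u, v)))
    return sorted(edges)

# A tetrahedron: 4 triangular faces share 6 unique edges.
tet = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(len(wireframe_edges(tet)))  # 6
```

Deduplicating shared edges matters in practice: printing every triangle’s three edges independently would trace most ropes of plastic twice.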

The researchers described the On-the-Fly-Print system in a paper presented at the 2016 ACM Conference for Human Computer Interaction. The work was supported in part by the National Science Foundation and by Autodesk Corp.

Machine-learning algorithms are increasingly used to make important decisions about our lives — such as credit approval, medical diagnoses, and job screening — but exactly how they work usually remains a mystery. Now Carnegie Mellon University researchers may have devised an effective way to improve transparency and head off confusion or possible legal issues.

CMU’s new Quantitative Input Influence (QII) testing tools can generate “transparency reports” that provide the relative weight of each factor in the final decision, claims Anupam Datta, associate professor of computer science and electrical and computer engineering.

Testing for discrimination

These reports could also be used proactively by an organization to see if an artificial intelligence system is working as desired, or by a regulatory agency to determine whether a decision-making system inappropriately discriminated, based on factors like race and gender.

To achieve that, the QII measures account for correlated inputs while measuring influence. For example, consider a system that assists in hiring decisions for a moving company, in which two inputs, gender and the ability to lift heavy weights, are positively correlated with each other and with hiring decisions.

Yet transparency into whether the system actually uses weightlifting ability or gender in making its decisions has substantive implications for determining if it is engaging in discrimination. In this example, the company could keep the weightlifting ability fixed, vary gender, and check whether there is a difference in the decision.
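
That fix-and-vary test can be sketched in a few lines. The sketch below estimates a single feature’s influence by swapping in that feature’s value from a randomly chosen individual and counting how often the decision flips; it is a simplified, single-feature illustration of the QII idea, not the authors’ implementation:

```python
import random

def intervention_influence(model, dataset, feature, trials=1000, seed=0):
    """Estimate one feature's causal influence on a classifier's decisions:
    repeatedly take a random individual, replace only that feature with a
    value drawn from another random individual, and count how often the
    decision flips."""
    rng = random.Random(seed)
    flips = 0
    for _ in range(trials):
        x = dict(rng.choice(dataset))        # copy so we can intervene
        before = model(x)
        x[feature] = rng.choice(dataset)[feature]
        flips += (model(x) != before)
    return flips / trials

# Hypothetical hiring model that secretly keys on gender, not strength:
model = lambda x: x["gender"] == "m"
data = [{"gender": g, "lift": l} for g in ("m", "f") for l in (0, 1)]
print(intervention_influence(model, data, "gender"))  # large: decisions flip
print(intervention_influence(model, data, "lift"))    # 0.0: feature is ignored
```

Because the intervention breaks the correlation between the two inputs, it reveals which one the model actually uses, which a purely associative measure cannot do.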

CMU researchers are careful to state in an open-access report on QII (presented at the IEEE Symposium on Security and Privacy, May 23–25, in San Jose, Calif.), that “QII does not suggest any normative definition of fairness. Instead, we view QII as a diagnostic tool to aid fine-grained fairness determinations.”

Is your AI biased?

The Ford Foundation published a controversial blog post last November stating that “while we’re led to believe that data doesn’t lie — and therefore, that algorithms that analyze the data can’t be prejudiced — that isn’t always true. The origin of the prejudice is not necessarily embedded in the algorithm itself. Rather, it is in the models used to process massive amounts of available data and the adaptive nature of the algorithm. As an adaptive algorithm is used, it can learn societal biases it observes.

“As Professor Alvaro Bedoya, executive director of the Center on Privacy and Technology at Georgetown University, explains, ‘any algorithm worth its salt’ will learn from the external process of bias or discriminatory behavior. To illustrate this, Professor Bedoya points to a hypothetical recruitment program that uses an algorithm written to help companies screen potential hires. If the hiring managers using the program only select younger applicants, the algorithm will learn to screen out older applicants the next time around.”

Influence variables

The QII measures also quantify the joint influence of a set of inputs (such as age and income) on outcomes, and the marginal influence of each input within the set. Since a single input may be part of multiple influential sets, the average marginal influence of the input is computed using “principled game-theoretic aggregation” measures that were previously applied to measure influence in revenue division and voting.
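
For a small number of features, this game-theoretic aggregation can be computed exactly. Below is a sketch of the Shapley value (one of the aggregation measures used in this line of work), applied to an invented set-influence function; the numbers are illustrative only:

```python
from itertools import combinations
from math import factorial

def shapley(features, v):
    """Exact Shapley values: each feature's average marginal contribution
    to the set-influence function v, weighted over all subsets with the
    standard game-theoretic weights (feasible only for small n)."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for s in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (v(set(s) | {f}) - v(set(s)))
        phi[f] = total
    return phi

# Invented set-influence function for two features: together they fully
# determine the outcome; alone, income matters somewhat more than age.
v = lambda s: {frozenset(): 0.0, frozenset({"age"}): 0.2,
               frozenset({"income"}): 0.4,
               frozenset({"age", "income"}): 1.0}[frozenset(s)]
print(shapley(["age", "income"], v))  # age ~0.4, income ~0.6
```

The Shapley values sum to the influence of the full set, which is what makes them a principled way to split joint influence among individual inputs.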

Examples of outcomes from transparency reports for two job applicants. Left: “Mr. X” is deemed to be a low-income individual by an income classifier learned from the data. This result may be surprising to him: he reports high capital gains ($14k), and only 2.1% of people with capital gains higher than $10k are reported as low income. In fact, he might be led to believe that his classification is a result of his ethnicity or country of origin. Examining his transparency report in the figure, however, we find that the most influential features that led to his negative classification were Marital Status, Relationship, and Education. Right: “Mr. Y” has even higher capital gains than Mr. X. Mr. Y is a 27-year-old with only Preschool education who is engaged in fishing. Examination of the transparency report reveals that the most influential factor for negative classification for Mr. Y is his Occupation. Interestingly, his low level of education is not considered very important by this classifier. (credit: Anupam Datta et al./2016 P IEEE S SECUR PRIV)

“To get a sense of these influence measures, consider the U.S. presidential election,” said Yair Zick, a post-doctoral researcher in the CMU Computer Science Department. “California and Texas have influence because they have many voters, whereas Pennsylvania and Ohio have power because they are often swing states. The influence aggregation measures we employ account for both kinds of power.”

The researchers tested their approach against some standard machine-learning algorithms that they used to train decision-making systems on real data sets. They found that the QII provided better explanations than standard associative measures for a host of scenarios they considered, including sample applications for predictive policing and income prediction.

Privacy concerns

But transparency reports could also potentially compromise privacy, so in the paper, the researchers also explore the transparency-privacy tradeoff and prove that a number of useful transparency reports can be made differentially private with very little addition of noise.
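
The standard way to make a released statistic differentially private is the Laplace mechanism: add noise with scale proportional to the statistic’s sensitivity divided by the privacy budget epsilon. A minimal sketch of that general technique (parameter values are illustrative, and this is not the paper’s exact construction):

```python
import math
import random

def laplace_private(value, sensitivity, epsilon, rng=None):
    """Release a statistic with epsilon-differential privacy using the
    standard Laplace mechanism: add noise with scale sensitivity/epsilon."""
    rng = rng or random.Random()
    scale = sensitivity / epsilon
    u = rng.random() - 0.5                    # uniform in [-0.5, 0.5)
    # inverse-CDF sample from Laplace(0, scale)
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return value + noise

influence_score = 0.42                        # hypothetical report entry
print(laplace_private(influence_score, sensitivity=0.01, epsilon=0.5))
```

When a report entry has low sensitivity (changing one person’s data barely moves it), the added noise is small, which is why the researchers can privatize many transparency reports with little loss of accuracy.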

QII is not yet available, but the CMU researchers are seeking collaboration with industrial partners so that they can employ QII at scale on operational machine-learning systems.

Abstract of Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems

Algorithmic systems that employ machine learning play an increasing role in making substantive decisions in modern society, ranging from online personalization to insurance and credit decisions to predictive policing. But their decision-making processes are often opaque—it is difficult to explain why a certain decision was made. We develop a formal foundation to improve the transparency of such decision-making systems. Specifically, we introduce a family of Quantitative Input Influence (QII) measures that capture the degree of influence of inputs on outputs of systems. These measures provide a foundation for the design of transparency reports that accompany system decisions (e.g., explaining a specific credit decision) and for testing tools useful for internal and external oversight (e.g., to detect algorithmic discrimination). Distinctively, our causal QII measures carefully account for correlated inputs while measuring influence. They support a general class of transparency queries and can, in particular, explain decisions about individuals (e.g., a loan decision) and groups (e.g., disparate impact based on gender). Finally, since single inputs may not always have high influence, the QII measures also quantify the joint influence of a set of inputs (e.g., age and income) on outcomes (e.g. loan decisions) and the marginal influence of individual inputs within such a set (e.g., income). Since a single input may be part of multiple influential sets, the average marginal influence of the input is computed using principled aggregation measures, such as the Shapley value, previously applied to measure influence in voting. Further, since transparency reports could compromise privacy, we explore the transparency-privacy tradeoff and prove that a number of useful transparency reports can be made differentially private with very little addition of noise. 
Our empirical validation with standard machine learning algorithms demonstrates that QII measures are a useful transparency mechanism when black box access to the learning system is available. In particular, they provide better explanations than standard associative measures for a host of scenarios that we consider. Further, we show that in the situations we consider, QII is efficiently approximable and can be made differentially private while preserving accuracy.

A series of studies over two years with rodents exposed to radio frequency radiation (RFR) found low incidences of malignant gliomas (tumors of glial support cells) in the brain and schwannoma tumors in the heart.*

The studies were performed under the auspices of the U.S. National Toxicology Program (NTP).

Potentially preneoplastic (pre-cancer) lesions were also observed in the brain and heart of male rats exposed to RFR, with higher confidence in the association with neoplastic lesions in the heart than the brain.

No biologically significant effects were observed in the brain or heart of female rats regardless of type of radiation.

The NTP notes that the open-access report is a preview and has not been peer-reviewed.**

* The rodents were subjected to whole-body exposure to the two types of RFR modulation currently used in U.S. wireless networks — CDMA and GSM — at frequencies of 900 MHz for rats and 1900 MHz for mice, with a total exposure time of approximately 9 hours spread over the course of each day, 7 days/week. The glioma lesions occurred in 2 to 3 percent of the rats and the schwannomas occurred in 1 to 6 percent of the rats.

** The NTP says further details will be published in the peer-reviewed literature later in 2016. The reports are “limited to select findings of concern in the brain and heart and do not represent a complete reporting of all findings from these studies of cell phone RFR,” which will be “reported together with the current findings in two forthcoming NTP peer-reviewed reports, to be available for peer review and public comment by the end of 2017.”
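
With incidences this low, the statistical uncertainty is substantial. As a rough illustration (using a hypothetical group size, not the NTP’s actual animal counts), a Wilson 95% confidence interval for 3 tumors observed in 100 animals spans several-fold:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson 95% confidence interval for a proportion of k events in n trials."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# 3 tumors in a hypothetical group of 100 exposed animals
lo, hi = wilson_ci(3, 100)
print(f"3% observed, 95% CI ({lo:.1%} to {hi:.1%})")
```

Such wide intervals are one reason low-incidence findings like these need peer review and replication before firm conclusions can be drawn.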

Abstract of Report of Partial Findings from the National Toxicology Program Carcinogenesis Studies of Cell Phone Radiofrequency Radiation

The U.S. National Toxicology Program (NTP) has carried out extensive rodent toxicology and carcinogenesis studies of radiofrequency radiation (RFR) at frequencies and modulations used in the US telecommunications industry. This report presents partial findings from these studies. The occurrences of two tumor types in male Harlan Sprague Dawley rats exposed to RFR, malignant gliomas in the brain and schwannomas of the heart, were considered of particular interest, and are the subject of this report. The findings in this report were reviewed by expert peer reviewers selected by the NTP and National Institutes of Health (NIH). These reviews and responses to comments are included as appendices to this report, and revisions to the current document have incorporated and addressed these comments. Supplemental information in the form of 4 additional manuscripts has been or will soon be submitted for publication. These manuscripts describe in detail the designs and performance of the RFR exposure system, the dosimetry of RFR exposures in rats and mice, the results of a series of pilot studies establishing the ability of the animals to thermoregulate during RFR exposures, and studies of DNA damage:

Capstick M, Kuster N, Kühn S, Berdinas-Torres V, Wilson P, Ladbury J, Koepke G, McCormick D, Gauger J, Melnick R. A radio frequency radiation reverberation chamber exposure system for rodents.

Yijian G, Capstick M, McCormick D, Gauger J, Horn T, Wilson P, Melnick RL, Kuster N. Life time dosimetric assessment for mice and rats exposed to cell phone radiation.

Wyde ME, Horn TL, Capstick M, Ladbury J, Koepke G, Wilson P, Stout MD, Kuster N, Melnick R, Bucher JR, McCormick D. Pilot studies of the National Toxicology Program’s cell phone radiofrequency radiation reverberation chamber exposure system.

Smith-Roe SL, Wyde ME, Stout MD, Winters J, Hobbs CA, Shepard KG, Green A, Kissling GE, Tice RR, Bucher JR, Witt KL. Evaluation of the genotoxicity of cell phone radiofrequency radiation in male and female rats and mice following subchronic exposure.

Mice normally freeze in place in response to fear, as shown here under the control condition (center row): fear conditioning induces freezing in response (recall) to the conditioned stimulus (a tone), but the freezing response normally decreases (extinction) over several days of repeated tone exposures as the mice get used to it. However, enhancing acetylcholine release in the amygdala (blue light) during conditioned fear training resulted in freezing that continued 24 hours later and persisted through the extinction period. In contrast, reducing acetylcholine (yellow light) during the initial training reduced freezing during recall and led to greater retention of the extinction learning (reduced freezing). (credit: Li Jiang et al./Neuron)

Imagine if people with dementia could enhance good memories or those with post-traumatic stress disorder could wipe out bad memories. A Stony Brook University research team has now taken a step toward that goal by manipulating one of the brain’s natural signaling mechanisms for emotional memory: a neurotransmitter called acetylcholine.

The region of the brain most involved in emotional memory is thought to be the amygdala. Cholinergic neurons that reside in the base of the brain — the same neurons that appear to be affected early in cognitive decline — stimulate release of acetylcholine by neurons in the amygdala, which strengthens emotional memories.

Because fear is a strong and emotionally charged experience, Lorna Role, PhD, Professor and Chair of the Department of Neurobiology and Behavior, and colleagues used a fear-based memory model in mice to test the underlying mechanism of memory and the specific role of acetylcholine in the amygdala.

A step toward reversing post-traumatic stress disorder

Be afraid. Be very afraid. Optogenetic stimulation with blue light. (credit: Deisseroth Laboratory)

To achieve precise control, the team used optogenetics, a research method using light, to stimulate specific populations of cholinergic neurons in the amygdala during the experiments to release acetylcholine. As noted in previous studies reported on KurzweilAI, shining blue (or green) light on neurons treated with light-sensitive membrane proteins stimulates the neurons while shining yellow (or red) light inhibits (blocks) them.

So when the researchers used optogenetics with blue light to increase the amount of acetylcholine released in the amygdala during the formation of a traumatic memory, they found it greatly strengthened fear memory, making the memory last more than twice as long as normal.

But when they decreased acetylcholine signaling (using yellow light) in the amygdala during a traumatic experience — one that normally produces a fear response — they could actually extinguish (wipe out) the memory.

Role said the long-term goal of their research is to find ways — potentially independent of acetylcholine (or drug administration) — to enhance or diminish the strength of good memories and diminish the bad ones.

Their findings are published in the journal Neuron. The research was supported in part by the National Institutes of Health.

Abstract of Cholinergic Signaling Controls Conditioned Fear Behaviors and Enhances Plasticity of Cortical-Amygdala Circuits

We examined the contribution of endogenous cholinergic signaling to the acquisition and extinction of fear-related memory by optogenetic regulation of cholinergic input to the basal lateral amygdala (BLA). Stimulation of cholinergic terminal fields within the BLA in awake-behaving mice during training in a cued fear-conditioning paradigm slowed the extinction of learned fear as assayed by multi-day retention of extinction learning. Inhibition of cholinergic activity during training reduced the acquisition of learned fear behaviors. Circuit mechanisms underlying the behavioral effects of cholinergic signaling in the BLA were assessed by in vivo and ex vivo electrophysiological recording. Photostimulation of endogenous cholinergic input (1) enhances firing of putative BLA principal neurons through activation of acetylcholine receptors (AChRs), (2) enhances glutamatergic synaptic transmission in the BLA, and (3) induces LTP of cortical-amygdala circuits. These studies support an essential role of cholinergic modulation of BLA circuits in the inscription and retention of fear memories.