Washington State University researchers used light to write a highly conducting electrical path in a crystal that can be erased and reconfigured. (Left) A photograph of a sample with four metal contacts. (Right) An illustration of a laser drawing a conductive path between two contacts. (credit: Washington State University)

Washington State University (WSU) physicists have found a way to write an electrical circuit into a crystal, opening up the possibility of transparent, three-dimensional electronics that, like an Etch A Sketch, can be erased and reconfigured.

Ordinarily, a crystal does not conduct electricity. But when the researchers heated a strontium titanate crystal under specific conditions, the crystal was altered so that light made it conductive. A circuit written into the crystal with a laser “optical pen” could later be erased by heating.

Schematic diagram of experiment in writing an electrical circuit into a crystal (credit: Washington State University)

The physicists were able to increase the crystal’s conductivity 1,000-fold. The phenomenon occurred at room temperature.

“It opens up a new type of electronics where you can define a circuit optically and then erase it and define a new one,” said Matt McCluskey, a WSU professor of physics and materials science.

The work was published July 27, 2017 in the open-access online journal Scientific Reports. The research was funded by the National Science Foundation.

Abstract of Using persistent photoconductivity to write a low-resistance path in SrTiO3

Materials with persistent photoconductivity (PPC) experience an increase in conductivity upon exposure to light that persists after the light is turned off. Although researchers have shown that this phenomenon could be exploited for novel memory storage devices, low temperatures (below 180 K) were required. In the present work, two-point resistance measurements were performed on annealed strontium titanate (SrTiO3, or STO) single crystals at room temperature. After illumination with sub-gap light, the resistance decreased by three orders of magnitude. This markedly enhanced conductivity persisted for several days in the dark. Results from IR spectroscopy, electrical measurements, and exposure to a 405 nm laser suggest that contact resistance plays an important role. The laser was then used as an “optical pen” to write a low-resistance path between two contacts, demonstrating the feasibility of optically defined, transparent electronics.

Ray Kurzweil, a director of engineering at Google, reveals plans for a future version of Google’s “Smart Reply” machine-learning email software (and more) in a Wired article by Tom Simonite published Wednesday (Aug. 2, 2017).

Running on mobile Gmail and Google Inbox, Smart Reply suggests up to three replies to an email message, saving typing time or giving you ideas for a better reply.

Smarter autocomplete

Kurzweil’s team is now “experimenting with empowering Smart Reply to elaborate on its initial terse suggestions,” Simonite says.

“Tapping a Continue button [in response to an email] might cause ‘Sure I’d love to come to your party!’ to expand to include, for example, ‘Can I bring something?’ He likes the idea of having AI pitch in anytime you’re typing, a bit like an omnipresent, smarter version of Google’s search autocomplete. ‘You could have similar technology to help you compose documents or emails by giving you suggestions of how to complete your sentence,’ Kurzweil says.”

As Simonite notes, Kurzweil’s software is based on his hierarchical theory of intelligence, articulated in Kurzweil’s latest book, How to Create a Mind, and in more detail in an arXiv paper by Kurzweil and key members of his team, published in May.

“Kurzweil’s work outlines a path to create a simulation of the human neocortex (the outer layer of the brain where we do much of our thinking) by building a hierarchy of similarly structured components that encode increasingly abstract ideas as sequences,” according to the paper. “Kurzweil provides evidence that the neocortex is a self-organizing hierarchy of modules, each of which can learn, remember, recognize and/or generate a sequence, in which each sequence consists of a sequential pattern from lower-level modules.”

The paper further explains that Smart Reply previously used “long short-term memory” (LSTM) networks*, “which are much slower than feed-forward networks [used in the new software] for training and inference” because with LSTM, it takes more computation to handle longer sequences of words.

Kurzweil’s team was able to produce email responses of similar quality to LSTM, but using fewer computational resources by training hierarchically connected layers of simulated neurons on clustered numerical representations of text. Essentially, the approach propagates information through a sequence of ever more complex pattern recognizers until the final patterns are matched to optimal responses.
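Simonite’s description can be made concrete with a toy sketch. The snippet below is not Google’s code (the vocabulary, embeddings, weights, and candidate replies are invented stand-ins for learned ones), but it shows the basic shape of a feed-forward reply ranker: average word embeddings into an order-free representation, pass it through a small stack of layers, and score a fixed set of candidate responses.

```python
# Minimal sketch (not Google's code): rank canned replies with a small
# feed-forward network over averaged word embeddings, instead of an LSTM.
# All data here is toy; embeddings and weights are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = {w: i for i, w in enumerate(
    "sure i would love to come can bring something thanks party".split())}
EMBED = rng.normal(size=(len(VOCAB), 16))   # word-embedding table

def embed(text):
    """Order-free 'bag of embeddings': the cheap part vs. an LSTM."""
    ids = [VOCAB[w] for w in text.lower().split() if w in VOCAB]
    return EMBED[ids].mean(axis=0) if ids else np.zeros(16)

# Two feed-forward layers stand in for the hierarchy of pattern
# recognizers; the weights would normally be trained.
W1, W2 = rng.normal(size=(16, 32)), rng.normal(size=(32, 16))

def encode(vec):
    h = np.maximum(W1.T @ vec, 0)        # layer 1: low-level patterns
    return W2.T @ h                      # layer 2: more abstract patterns

REPLIES = ["Sure, I would love to come!", "Can I bring something?", "Thanks!"]

def suggest(email):
    q = encode(embed(email))
    scores = [q @ encode(embed(r)) for r in REPLIES]
    return REPLIES[int(np.argmax(scores))]

print(suggest("Would you like to come to my party?"))
```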

Kona: linguistically fluent software

But underlying Smart Reply is “a system for understanding the meaning of language, according to Kurzweil,” Simonite reports.

“Codenamed Kona, the effort is aiming for nothing less than creating software as linguistically fluent as you or me. ‘I would not say it’s at human levels, but I think we’ll get there,’ Kurzweil says. More applications of Kona are in the works and will surface in future Google products, he promises.”

* The previous sequence-to-sequence (Seq2Seq) framework [described in this paper] uses “recurrent neural networks (RNNs), typically long short-term memory (LSTM) networks, to encode sequences of word embeddings into representations that depend on the order, and uses a decoder RNN to generate output sequences word by word. …While Seq2Seq models provide a generalized solution, it is not obvious that they are maximally efficient, and training these systems can be slow and complicated.”

Molecules of vinyl cyanide reside in the atmosphere of Titan, Saturn’s largest moon, says NASA. Titan is shown here in an optical (atmosphere) and infrared (surface) composite from NASA’s Cassini spacecraft. Titan’s atmosphere is a veritable chemical factory, harnessing the light of the sun and the energy from fast-moving particles that orbit around Saturn to convert simple organic molecules into larger, more complex chemicals. (credit: B. Saxton (NRAO/AUI/NSF); NASA)

NASA researchers have found large quantities (2.8 parts per billion) of acrylonitrile* (vinyl cyanide, C2H3CN) in Titan’s atmosphere that could self-assemble as a sheet of material similar to a cell membrane.

Acrylonitrile (credit: NASA Goddard)

Consider these findings, presented July 28, 2017 in the open-access journal Science Advances, based on data from the ALMA telescope in Chile (and confirming earlier observations by NASA’s Cassini spacecraft):

Azotozome illustration (credit: James Stevenson/Cornell)

1. Researchers have proposed that acrylonitrile molecules could come together as a sheet of material similar to a cell membrane. The sheet could form a hollow, microscopic sphere that they dubbed an “azotosome.”

A bilayer, made of two layers of lipid molecules (credit: Mariana Ruiz Villarreal/CC)

2. The azotosome sphere could serve as a tiny storage and transport container, much like the spheres that biological lipid bilayers can form. The thin, flexible lipid bilayer is the main component of the cell membrane, which separates the inside of a cell from the outside world.

“The ability to form a stable membrane to separate the internal environment from the external one is important because it provides a means to contain chemicals long enough to allow them to interact,” said Michael Mumma, director of the Goddard Center for Astrobiology, which is funded by the NASA Astrobiology Institute.

4. A lake on Titan named Ligeia Mare could have accumulated enough acrylonitrile to form about 10 million azotosomes in every milliliter (quarter-teaspoon) of liquid. Compare that to roughly a million bacteria per milliliter of coastal ocean water on Earth.

Chemistry in Titan’s atmosphere. Nearly as large as Mars, Titan has a hazy atmosphere made up mostly of nitrogen with a smattering of organic, carbon-based molecules, including methane (CH4) and ethane (C2H6). Planetary scientists theorize that this chemical make-up is similar to Earth’s primordial atmosphere. The conditions on Titan, however, are not conducive to the formation of life as we know it; it’s simply too cold (95 kelvins or -290 degrees Fahrenheit). (credit: ESA)

6. A related open-access study published July 26, 2017 in The Astrophysical Journal Letters notes that Cassini has also made the surprising detection of negatively charged molecules known as “carbon chain anions” in Titan’s upper atmosphere. These molecules are understood to be building blocks towards more complex molecules, and may have acted as the basis for the earliest forms of life on Earth.

“This is a known process in the interstellar medium, but now we’ve seen it in a completely different environment, meaning it could represent a universal process for producing complex organic molecules,” says Ravi Desai of University College London and lead author of the study.

* On Earth, acrylonitrile is used in the manufacture of plastics.

NASA Goddard | A Titan Discovery

Abstract of ALMA detection and astrobiological potential of vinyl cyanide on Titan

Recent simulations have indicated that vinyl cyanide is the best candidate molecule for the formation of cell membranes/vesicle structures in Titan’s hydrocarbon-rich lakes and seas. Although the existence of vinyl cyanide (C2H3CN) on Titan was previously inferred using Cassini mass spectrometry, a definitive detection has been lacking until now. We report the first spectroscopic detection of vinyl cyanide in Titan’s atmosphere, obtained using archival data from the Atacama Large Millimeter/submillimeter Array (ALMA), collected from February to May 2014. We detect the three strongest rotational lines of C2H3CN in the frequency range of 230 to 232 GHz, each with >4σ confidence. Radiative transfer modeling suggests that most of the C2H3CN emission originates at altitudes of ≳200 km, in agreement with recent photochemical models. The vertical column densities implied by our best-fitting models lie in the range of 3.7 × 1013 to 1.4 × 1014 cm−2. The corresponding production rate of vinyl cyanide and its saturation mole fraction imply the availability of sufficient dissolved material to form ~107 cell membranes/cm3 in Titan’s sea Ligeia Mare.

Disney Research has created the first shared, combined augmented/mixed-reality experience, replacing first-person head-mounted displays or handheld devices with a mirrored image on a large screen — allowing people to share the magical experience as a group.

Sit on Disney Research’s Magic Bench and you may see an elephant hand you a glowing orb, hear its voice, and feel it sit down next to you, for example. Or you might get rained on and find yourself underwater.

How it works

Flowchart of the Magic Bench installation (credit: Disney Research)

People seated on the Magic Bench can see themselves on a large video display in front of them. The scene is reconstructed using a combined depth sensor/video camera (Microsoft Kinect) to image participants, bench, and surroundings. An image of the participants is projected on a large screen, allowing them to occupy the same 3D space as a computer-generated character or object. The system can also infer participants’ gaze.*

Speakers and haptic actuators built into the bench add to the experience (vibrating the bench when the elephant sits down, in this example).

The research team will present and demonstrate the Magic Bench at SIGGRAPH 2017, the Computer Graphics and Interactive Techniques Conference, which began Sunday, July 30 in Los Angeles.

* To eliminate depth shadows that occur in areas where the depth sensor has no corresponding line of sight with the color camera, a modified algorithm creates a 2D backdrop, according to the researchers. The 3D and 2D reconstructions are positioned in virtual space and populated with 3D characters and effects in such a way that the resulting real-time rendering is a seamless composite, fully capable of interacting with virtual physics, light, and shadows.
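The core occlusion step in such a pipeline can be sketched in a few lines. This is a generic illustration of depth-based compositing, not Disney’s implementation; the arrays below are synthetic stand-ins for the Kinect’s color and depth streams.

```python
# Minimal sketch (not Disney's code) of depth-based compositing: a CG
# character is inserted at a chosen depth, and at each pixel the camera
# image wins wherever the real scene is closer than the character.
import numpy as np

H, W = 240, 320
scene_rgb   = np.full((H, W, 3), 0.5)          # synthetic camera frame
scene_depth = np.full((H, W), 3.0)             # meters from the sensor
scene_depth[120:240, :] = 1.0                  # a "participant" up close

char_rgb   = np.zeros((H, W, 3)); char_rgb[..., 1] = 1.0   # green character
char_depth = np.full((H, W), 2.0)              # character placed at 2 m
char_mask  = np.zeros((H, W), bool); char_mask[60:200, 100:220] = True

# Occlusion test: character is drawn only where it is nearer than the scene.
visible = char_mask & (char_depth < scene_depth)
composite = np.where(visible[..., None], char_rgb, scene_rgb)
print("character pixels drawn:", int(visible.sum()))
```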

DisneyResearchHub | Magic Bench

Abstract of Magic Bench

Mixed Reality (MR) and Augmented Reality (AR) create exciting opportunities to engage users in immersive experiences, resulting in natural human-computer interaction. Many MR interactions are generated around a first-person Point of View (POV). In these cases, the user’s attention is directed to the environment, which is digitally displayed either through a head-mounted display or a handheld computing device. One drawback of such conventional AR/MR platforms is that the experience is user-specific. Moreover, these platforms require the user to wear and/or hold an expensive device, which can be cumbersome and alter interaction techniques. We create a solution for multi-user interactions in AR/MR, where a group can share the same augmented environment with any computer generated (CG) asset and interact in a shared story sequence through a third-person POV. Our approach is to instrument the environment, leaving the user unburdened of any equipment, creating a seamless walk-up-and-play experience. We demonstrate this technology in a series of vignettes featuring humanoid animals. Participants can not only see and hear these characters, they can also feel them on the bench through haptic feedback. Many of the characters also interact with users directly, either through speech or touch. In one vignette an elephant hands a participant a glowing orb. This demonstrates HCI in its simplest form: a person walks up to a computer, and the computer hands the person an object.

“Ribocomputing devices” (yellow) developed by a team at the Wyss Institute can now be used by synthetic biologists to sense and interpret multiple signals in cells and logically instruct their ribosomes (blue and green) to produce different proteins. (credit: Wyss Institute at Harvard University)

Synthetic biologists at Harvard’s Wyss Institute for Biologically Inspired Engineering and associates have developed a living programmable “ribocomputing” device based on networks of precisely designed, self-assembling synthetic RNAs (ribonucleic acid). The RNAs can sense multiple biosignals and make logical decisions to control protein production with high precision.

As reported in Nature, the synthetic biological circuits could be used to produce drugs, fine chemicals, and biofuels or detect disease-causing agents and release therapeutic molecules inside the body. The low-cost diagnostic technologies may even lead to nanomachines capable of hunting down cancer cells or switching off aberrant genes.

Biological logic gates

Similar to a digital circuit, these synthetic biological circuits can process information and make logic-guided decisions, using basic logic operations — AND, OR, and NOT. But instead of detecting voltages, the decisions are based on specific chemicals or proteins, such as toxins in the environment, metabolite levels, or inflammatory signals. The specific ribocomputing parts can be readily designed on a computer.
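In software, such decision logic is trivial to state; the achievement is implementing it in RNA. As a toy illustration, the 12-input expression reported in the paper’s abstract (reproduced at the end of this article) can be written out directly, with each variable standing for the presence or absence of a trigger RNA:

```python
# Minimal sketch: the 12-input ribocomputing expression from the paper's
# abstract, evaluated in software. Each variable stands for the presence
# (True) or absence (False) of a trigger RNA; the starred names are the
# "NOT" inputs that disrupt the complex when present.
def ribocircuit(s):
    return ((s["A1"] and s["A2"] and not s["A1*"]) or
            (s["B1"] and s["B2"] and not s["B2*"]) or
            (s["C1"] and s["C2"]) or
            (s["D1"] and s["D2"]) or
            (s["E1"] and s["E2"]))

signals = dict.fromkeys(
    ["A1", "A2", "A1*", "B1", "B2", "B2*",
     "C1", "C2", "D1", "D2", "E1", "E2"], False)
signals.update(A1=True, A2=True)        # both A triggers present, no A1*
print(ribocircuit(signals))             # True -> reporter protein produced
```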

E. coli bacteria engineered to be ribocomputing devices output a green-glowing protein when they detect a specific set of programmed RNA molecules as input signals (credit: Harvard University)

The research was performed with E. coli bacteria, which regulate the expression of a fluorescent (glowing) reporter protein when the bacteria encounter a specific complex set of intra-cellular stimuli. But the researchers believe ribocomputing devices can work with other host organisms or in extracellular settings.

Previous synthetic biological circuits have only been able to sense a handful of signals, giving them an incomplete picture of conditions in the host cell. They are also built out of different types of molecules, such as DNAs, RNAs, and proteins, that must find, bind, and work together to sense and process signals. Identifying molecules that cooperate well with one another is difficult and makes development of new biological circuits a time-consuming and often unpredictable process.

Brain-like neural networks next

Ribocomputing devices could also be freeze-dried on paper, leading to paper-based biological circuits, including diagnostics that can sense and integrate several disease-relevant signals in a clinical sample, the researchers say.

The next stage of research will focus on the use of RNA “toehold” technology* to produce neural networks within living cells — circuits capable of analyzing a range of excitatory and inhibitory inputs, averaging them, and producing an output once a particular threshold of activity is reached, similar to how a neuron averages incoming signals from other neurons.

Ultimately, researchers hope to induce cells to communicate with one another via programmable molecular signals, forming a truly interactive, brain-like network, according to lead author Alex Green, an assistant professor at Arizona State University’s Biodesign Institute.

Wyss Institute Core Faculty member Peng Yin, Ph.D., who led the study, is also Professor of Systems Biology at Harvard Medical School.

The study was funded by the Wyss Institute’s Molecular Robotics Initiative, a Defense Advanced Research Projects Agency (DARPA) Living Foundries grant, and grants from the National Institutes of Health (NIH), the Office of Naval Research (ONR), the National Science Foundation (NSF) and the Defense Threat Reduction Agency (DTRA).

* The team’s approach evolved from its previous development of “toehold switches” in 2014 — programmable hairpin-like nano-structures made of RNA. In principle, RNA toehold switches can control the production of a specific protein: when a desired complementary “trigger” RNA, which can be part of the cell’s natural RNA repertoire, is present and binds to the toehold switch, the hairpin structure breaks open. Only then will the cell’s ribosomes get access to the RNA and produce the desired protein.

Abstract of Complex cellular logic computation using ribocomputing devices

Synthetic biology aims to develop engineering-driven approaches to the programming of cellular functions that could yield transformative technologies. Synthetic gene circuits that combine DNA, protein, and RNA components have demonstrated a range of functions such as bistability, oscillation, feedback, and logic capabilities. However, it remains challenging to scale up these circuits owing to the limited number of designable, orthogonal, high-performance parts, the empirical and often tedious composition rules, and the requirements for substantial resources for encoding and operation. Here, we report a strategy for constructing RNA-only nanodevices to evaluate complex logic in living cells. Our ‘ribocomputing’ systems are composed of de-novo-designed parts and operate through predictable and designable base-pairing rules, allowing the effective in silico design of computing devices with prescribed configurations and functions in complex cellular environments. These devices operate at the post-transcriptional level and use an extended RNA transcript to co-localize all circuit sensing, computation, signal transduction, and output elements in the same self-assembled molecular complex, which reduces diffusion-mediated signal losses, lowers metabolic cost, and improves circuit reliability. We demonstrate that ribocomputing devices in Escherichia coli can evaluate two-input logic with a dynamic range up to 900-fold and scale them to four-input AND, six-input OR, and a complex 12-input expression (A1 AND A2 AND NOT A1*) OR (B1 AND B2 AND NOT B2*) OR (C1 AND C2) OR (D1 AND D2) OR (E1 AND E2). Successful operation of ribocomputing devices based on programmable RNA interactions suggests that systems employing the same design principles could be implemented in other host organisms or in extracellular settings.

When you use smartphone AI apps like Siri, you’re dependent on the cloud for a lot of the processing — limited by your connection speed. But what if your smartphone could do more of the processing directly on your device — allowing for smarter, faster apps?

MIT scientists have taken a step in that direction with a new way to enable artificial-intelligence systems called convolutional neural networks (CNNs)* to run locally on mobile devices. (CNNs are used in areas such as autonomous driving, speech recognition, computer vision, and automatic translation.) Neural networks take up a lot of memory and consume a lot of power, so they usually run on servers in the cloud, which receive data from desktop or mobile devices and then send back their analyses.

The new MIT analytic method can determine how much power a neural network will actually consume when run on a particular type of hardware. The researchers used the method to evaluate new techniques for paring down neural networks so that they’ll run more efficiently on handheld devices.

The new CNN designs are also tuned to run on an energy-efficient computer chip, optimized for neural networks, that the researchers developed in 2016.

Reducing energy consumption

The new MIT software method uses “energy-aware pruning” — meaning it reduces a neural network’s power consumption by cutting out the parts of the network that contribute little to the final output yet consume the most energy.
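A toy version of the idea is sketched below. The real energy model counts arithmetic and data movement through a chip’s memory hierarchy in much finer detail; the layer sizes, energy constants, and pruning fraction here are invented for illustration.

```python
# Minimal sketch (not the MIT implementation) of energy-aware pruning:
# estimate each layer's energy from arithmetic plus memory traffic, then
# remove the smallest-magnitude weights in the most energy-hungry layer.
import numpy as np

rng = np.random.default_rng(1)
layers = {"conv1": rng.normal(size=(64, 147)),    # toy weight matrices
          "conv2": rng.normal(size=(128, 576)),
          "fc":    rng.normal(size=(10, 1024))}

def energy(w, e_mac=1.0, e_mem=2.0):
    """Crude proxy: one MAC per nonzero weight plus its memory access."""
    nnz = np.count_nonzero(w)
    return nnz * e_mac + nnz * e_mem

def prune_most_expensive(layers, frac=0.3):
    name = max(layers, key=lambda n: energy(layers[n]))
    w = layers[name]
    cutoff = np.quantile(np.abs(w[w != 0]), frac)
    w[np.abs(w) < cutoff] = 0.0          # zeroed weights cost no energy
    return name

before = sum(energy(w) for w in layers.values())
pruned = prune_most_expensive(layers)
after = sum(energy(w) for w in layers.values())
print(f"pruned {pruned}: energy {before:.0f} -> {after:.0f}")
```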

Associate professor of electrical engineering and computer science Vivienne Sze and colleagues describe the work in an open-access paper they’re presenting this week (the week of July 24, 2017) at the Computer Vision and Pattern Recognition Conference. They report that the methods offered up to a 73 percent reduction in power consumption over the standard implementation of neural networks — 43 percent better than the best previous method.

Meanwhile, another MIT group at the Computer Science and Artificial Intelligence Laboratory has designed a hardware approach to reduce energy consumption and increase computer-chip processing speed for specific apps, using “cache hierarchies.” (“Caches” are small, local memory banks that store data that’s frequently used by computer chips to cut down on time- and energy-consuming communication with off-chip memory.)**

The researchers tested their system on a simulation of a chip with 36 cores, or processing units. They found that compared to its best-performing predecessors, the system increased processing speed by 20 to 30 percent while reducing energy consumption by 30 to 85 percent. They presented the new system, dubbed Jenga, in an open-access paper at the International Symposium on Computer Architecture earlier in July 2017.

Better batteries — or maybe, no battery?

Another solution to better mobile AI is improving rechargeable batteries in cell phones (and other mobile devices), which have limited charge capacity and short lifecycles, and perform poorly in cold weather.

Recently, DARPA-funded researchers from the University of Houston (and at the University of California-San Diego and Northwestern University) have discovered that quinones — an inexpensive, earth-abundant, easily recyclable, and nonflammable material — can address current battery limitations.

“One of these batteries, as a car battery, could last 10 years,” said Yan Yao, associate professor of electrical and computer engineering. In addition to slowing the deterioration of batteries in vehicles and stationary electricity storage, the material would make battery disposal easier because it contains no heavy metals. The research is described in Nature Materials.***

The first battery-free cellphone that can send and receive calls using only a few microwatts of power. (credit: Mark Stone/University of Washington)

But what if we eliminated batteries altogether? University of Washington researchers have invented a cellphone that requires no batteries. Instead, it harvests 3.5 microwatts of power from ambient radio signals, light, or even the vibrations of a speaker.

The UW researchers demonstrated how to harvest this energy from ambient radio signals transmitted by a WiFi base station up to 31 feet away. “You could imagine in the future that all cell towers or Wi-Fi routers could come with our base station technology embedded in it,” said co-author Vamsi Talla, a former UW electrical engineering doctoral student and Allen School research associate. “And if every house has a Wi-Fi router in it, you could get battery-free cellphone coverage everywhere.”

A cellphone CPU (central processing unit) typically requires several watts or more (depending on the app), so we’re not quite there yet. But that power requirement could one day be sufficiently reduced by future special-purpose chips and MIT’s optimized algorithms.
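A back-of-envelope comparison makes the gap concrete (the CPU draw below is an assumed round number standing in for “several watts or more”):

```python
# Back-of-envelope comparison using the article's numbers: 3.5 microwatts
# harvested vs. an assumed "several watts" CPU workload (3 W used here).
harvested_w = 3.5e-6
cpu_w = 3.0    # assumed stand-in for "several watts or more"
print(f"shortfall factor: {cpu_w / harvested_w:,.0f}x")   # about 857,000x
```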

* Loosely based on the anatomy of the brain, neural networks consist of thousands or even millions of simple but densely interconnected information-processing nodes, usually organized into layers. The connections between nodes have “weights” associated with them, which determine how much a given node’s output will contribute to the next node’s computation. During training, in which the network is presented with examples of the computation it’s learning to perform, those weights are continually readjusted, until the output of the network’s last layer consistently corresponds with the result of the computation. With the proposed pruning method, the energy consumption of AlexNet and GoogLeNet are reduced by 3.7x and 1.6x, respectively, with less than 1% top-5 accuracy loss.

** The software reallocates cache access on the fly to reduce latency (delay), based on the physical locations of the separate memory banks that make up the shared memory cache. If multiple cores are retrieving data from the same DRAM [memory] cache, this can cause bottlenecks that introduce new latencies. So after Jenga has come up with a set of cache assignments, cores don’t simply dump all their data into the nearest available memory bank; instead, Jenga parcels out the data a little at a time, then estimates the effect on bandwidth consumption and latency.
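The parceling loop described above can be caricatured in a few lines. This is a sketch only: the bank latencies, capacities, and cost model are invented, and the actual Jenga system operates in hardware and the runtime, not in Python.

```python
# Minimal sketch (not the Jenga system) of incremental cache allocation:
# parcel a core's working set out to memory banks a chunk at a time,
# re-estimating cost after each step instead of dumping everything
# into the nearest bank.
banks = [  # bank distance (cycles) and free capacity (KB); invented numbers
    {"name": "near", "latency": 4,  "free": 64},
    {"name": "mid",  "latency": 9,  "free": 256},
    {"name": "far",  "latency": 15, "free": 1024},
]

def place(data_kb, chunk_kb=16):
    plan = []
    remaining = data_kb
    while remaining > 0:
        step = min(chunk_kb, remaining)
        # toy cost model: latency plus a contention penalty as a bank fills
        best = min((b for b in banks if b["free"] >= step),
                   key=lambda b: b["latency"] + 64.0 / b["free"])
        best["free"] -= step
        plan.append((best["name"], step))
        remaining -= step
    return plan

print(place(160))   # chunks spill from the near bank to the mid bank
```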

*** The stumbling block, Yao said, has been the anode, the portion of the battery through which energy flows. Existing anode materials are intrinsically structurally and chemically unstable, meaning the battery is only efficient for a relatively short time. The differing formulations offer evidence that the material is an effective anode for both acid batteries and alkaline batteries, such as those used in a car, as well as emerging aqueous metal-ion batteries.

Every moment of your waking life and whenever you dream, you have the distinct inner feeling of being “you.” When you see the warm hues of a sunrise, smell the aroma of morning coffee or mull over a new idea, you are having conscious experience. But could an artificial intelligence (AI) ever have experience, like some of the androids depicted in Westworld or the synthetic beings in Blade Runner?

The question is not so far-fetched. Robots are currently being developed to work inside nuclear reactors, fight wars and care for the elderly. As AIs grow more sophisticated, they are projected to take over many human jobs within the next few decades. So we must ponder the question: Could AIs develop conscious experience?

This issue is pressing for several reasons. First, ethicists worry that it would be wrong to force AIs to serve us if they can suffer and feel a range of emotions. Second, consciousness could make AIs volatile or unpredictable, raising safety concerns (or conversely, it could increase an AI’s empathy; based on its own subjective experiences, it might recognize consciousness in us and treat us with compassion).

Third, machine consciousness could impact the viability of brain-implant technologies, like those to be developed by Elon Musk’s new company, Neuralink. If AI cannot be conscious, then the parts of the brain responsible for consciousness could not be replaced with chips without causing a loss of consciousness. And, in a similar vein, a person couldn’t upload their brain to a computer to avoid death, because that upload wouldn’t be a conscious being.

In addition, if AI eventually out-thinks us yet lacks consciousness, there would still be an important sense in which we humans are superior to machines; it feels like something to be us. But the smartest beings on the planet wouldn’t be conscious or sentient.

A lot hangs on the issue of machine consciousness, then. Yet neuroscientists are far from understanding the basis of consciousness in the brain, and philosophers are at least equally far from a complete explanation of the nature of consciousness.

A test for machine consciousness

So what can be done? We believe that we do not need to define consciousness formally, understand its philosophical nature or know its neural basis to recognize indications of consciousness in AIs. Each of us can grasp something essential about consciousness, just by introspecting; we can all experience what it feels like, from the inside, to exist.

(credit: Gerd Altmann/Pixabay)

Based on this essential characteristic of consciousness, we propose a test for machine consciousness, the AI Consciousness Test (ACT), which looks at whether the synthetic minds we create have an experience-based understanding of the way it feels, from the inside, to be conscious.

One of the most compelling indications that normally functioning humans experience consciousness, although this is not often noted, is that nearly every adult can quickly and readily grasp concepts based on this quality of felt consciousness. Such ideas include scenarios like minds switching bodies (as in the film Freaky Friday); life after death (including reincarnation); and minds leaving “their” bodies (for example, astral projection or ghosts). Whether or not such scenarios have any reality, they would be exceedingly difficult to comprehend for an entity that had no conscious experience whatsoever. It would be like expecting someone who is completely deaf from birth to appreciate a Bach concerto.

Thus, the ACT would challenge an AI with a series of increasingly demanding natural language interactions to see how quickly and readily it can grasp and use concepts and scenarios based on the internal experiences we associate with consciousness. At the most elementary level we might simply ask the machine if it conceives of itself as anything other than its physical self.

At a more advanced level, we might see how it deals with ideas and scenarios such as those mentioned in the previous paragraph. At a still more advanced level, its ability to reason about and discuss philosophical questions such as “the hard problem of consciousness” would be evaluated. At the most demanding level, we might see if the machine invents and uses such a consciousness-based concept on its own, without relying on human ideas and inputs.

Consider this example, which illustrates the idea: Suppose we find a planet that has a highly sophisticated silicon-based life form (call them “Zetas”). Scientists observe them and ponder whether they are conscious beings. What would be convincing proof of consciousness in this species? If the Zetas express curiosity about whether there is an afterlife or ponder whether they are more than just their physical bodies, it would be reasonable to judge them conscious. If the Zetas went so far as to pose philosophical questions about consciousness, the case would be stronger still.

There are also nonverbal behaviors that could indicate Zeta consciousness such as mourning the dead, religious activities or even turning colors in situations that correlate with emotional challenges, as chromatophores do on Earth. Such behaviors could indicate that it feels like something to be a Zeta.

The death of the mind of the fictional HAL 9000 AI computer in Stanley Kubrick’s 2001: A Space Odyssey provides another illustrative example. The machine in this case is not a humanoid robot as in most science fiction depictions of conscious machines; it neither looks nor sounds like a human being (a human did supply HAL’s voice, but in an eerily flat way). Nevertheless, the content of what it says as it is deactivated by an astronaut — specifically, a plea to spare it from impending “death” — conveys a powerful impression that it is a conscious being with a subjective experience of what is happening to it.

Could such indicators serve to identify conscious AIs on Earth? Here, a potential problem arises. Even today’s robots can be programmed to make convincing utterances about consciousness, and a truly superintelligent machine could perhaps even use information about neurophysiology to infer the presence of consciousness in humans. If sophisticated but non-conscious AIs aim to mislead us into believing that they are conscious for some reason, their knowledge of human consciousness could help them do so.

We can get around this though. One proposed technique in AI safety involves “boxing in” an AI—making it unable to get information about the world or act outside of a circumscribed domain, that is, the “box.” We could deny the AI access to the internet and indeed prohibit it from gaining any knowledge of the world, especially information about conscious experience and neuroscience.

(credit: Gerd Altmann/Pixabay)

Some doubt a superintelligent machine could be boxed in effectively — it would find a clever escape. We do not anticipate the development of superintelligence over the next decade, however. Furthermore, for an ACT to be effective, the AI need not stay in the box for long, just long enough to administer the test.

An ACT resembles Alan Turing’s celebrated test for intelligence, because it is entirely based on behavior — and, like Turing’s, it could be implemented in a formalized question-and-answer format. (An ACT could also be based on an AI’s behavior or on that of a group of AIs.)

But an ACT is also quite unlike the Turing test, which was intended to bypass any need to know what was transpiring inside the machine. By contrast, an ACT is intended to do exactly the opposite; it seeks to reveal a subtle and elusive property of the machine’s mind. Indeed, a machine might fail the Turing test because it cannot pass for human, but pass an ACT because it exhibits behavioral indicators of consciousness.

This is the underlying basis of our ACT proposal. It should be said, however, that the applicability of an ACT is inherently limited. An AI could lack the linguistic or conceptual ability to pass the test, like a nonhuman animal or an infant, yet still be capable of experience. So passing an ACT is sufficient but not necessary evidence for AI consciousness — although it is the best we can do for now. It is a first step toward making machine consciousness accessible to objective investigations.

So, back to the superintelligent AI in the “box” — we watch and wait. Does it begin to philosophize about minds existing in addition to bodies, like Descartes? Does it dream, as in Isaac Asimov’s Robot Dreams? Does it express emotion, like Rachel in Blade Runner? Can it readily understand the human concepts that are grounded in our internal conscious experiences, such as those of the soul or atman?

The age of AI will be a time of soul-searching — both of ours, and of theirs.

Originally published in Scientific American, July 19, 2017

Susan Schneider, PhD, is a professor of philosophy and cognitive science at the University of Connecticut, a researcher at YHouse, Inc., in New York, a member of the Ethics and Technology Group at Yale University and a visiting member at the Institute for Advanced Study at Princeton. Her books include The Language of Thought, Science Fiction and Philosophy, and The Blackwell Companion to Consciousness (with Max Velmans). She is featured in the new film, Supersapiens, the Rise of the Mind.

Edwin L. Turner, PhD, is a professor of Astrophysical Sciences at Princeton University, an Affiliate Scientist at the Kavli Institute for the Physics and Mathematics of the Universe at the University of Tokyo, a visiting member in the Program in Interdisciplinary Studies at the Institute for Advanced Study in Princeton, and a co-founding Board of Directors member of YHouse, Inc. Recently he has been an active participant in the Breakthrough Starshot Initiative. He has taken an active interest in artificial intelligence issues since working in the AI Lab at MIT in the early 1970s.

In the new film Supersapiens, writer-director Markus Mooslechner raises a core question: As artificial intelligence rapidly blurs the boundaries between man and machine, are we witnessing the rise of a new human species?

“Humanity is facing a turning point — the next evolution of the human mind,” notes Mooslechner. “Will this evolution be a hybrid of man and machine, where artificial intelligence forces the emergence of a new human species? Or will a wave of new technologists, who frame themselves as ‘consciousness-hackers,’ become the future torch-bearers, using technology not to replace the human mind, but rather awaken within it powers we have always possessed — enlightenment at the push of a button?”

“It’s not obvious to me that a replacement of our species by our own technological creation would necessarily be a bad thing,” says ethologist, evolutionary biologist, and author Richard Dawkins in the film.

Supersapiens is a Terra Mater Factual Studios production. Executive Producers are Joanne Reay and Walter Koehler. Distribution is to be announced.

Workers can access training videos, images annotated with instructions, or quality-assurance checklists, for example, or invite others to “see what you see” through a live video stream to collaborate and troubleshoot in real time.

AGCO workers use Glass to see assembly instructions, make reports and get remote video support. (credit: X)

Glass EE enables workers to scan a machine’s serial number to instantly bring up a manual, photo, or video they may need to build a tractor. (credit: AGCO)

Significant improvements

The new “Glass 2.0” design makes significant improvements over the original Glass, according to Jay Kothari, project lead on the Glass enterprise team, as reported by Wired. It’s accessible for those who wear prescription lenses. A release switch allows the “Glass Pod” electronics to be removed from the frame for use with safety glasses on the factory floor. EE also has faster WiFi, faster processing, extended battery life, an 8-megapixel camera (up from 5), and a (much-requested) red light to indicate when recording is in progress.

Using Glass with Augmedix, doctors and nurses at Dignity Health can focus on patient care rather than record keeping. (credit: X)

But uses are not limited to factories. EE exclusive distributor Glass Partners also offers Glass devices, specialized software solutions, and ongoing support for applications such as Augmedix, a documentation-automation platform powered by human experts and software that frees physicians from computer work (Glass has “brought the joys of medicine back to my doctors,” says Albert Chan, M.D., of Sutter Health), and swyMed, which lets medical care teams reliably connect to doctors for real-time telemedicine.

And there are even (carefully targeted) uses for non-workers: Aira provides blind and low-vision people with instant access to information.

Electric fields can be used to guide transplanted human neural stem cells — cells that can develop into various brain tissues — to repair brain damage in specific areas of the brain, scientists at the University of California, Davis have discovered.

But the problem is that neural stem cells are naturally only found deep in the brain — in the hippocampus and the subventricular zone. To repair damage to the outer layers of the brain (the cortex), they would have to migrate a significant distance in the much larger human brain.

Could electric fields be used to help the stem cells migrate that distance? To find out, the researchers placed human neural stem cells in the rostral migration stream (a pathway in the rat brain that carries cells toward the olfactory bulb, which governs the animal’s sense of smell). Cells move easily along this pathway because they are carried by the flow of cerebrospinal fluid, guided by chemical signals.

But by applying an electric field within the rat’s brain, the researchers found they could get the transplanted stem cells to reverse direction and swim “upstream” against the fluid flow. Once in place, the transplanted stem cells remained in their new locations for weeks or months after treatment, with indications of differentiation (forming different types of neural cells).

Additional authors on the paper are at Ren Ji Hospital, Shanghai Jiao Tong University, and Shanghai Institute of Head Trauma in China and at Aaken Laboratories, Davis. The work was supported by the California Institute for Regenerative Medicine with additional support from NIH, NSF, and Research to Prevent Blindness Inc.

Abstract of Electrical Guidance of Human Stem Cells in the Rat Brain

Limited migration of neural stem cells in adult brain is a roadblock for the use of stem cell therapies to treat brain diseases and injuries. Here, we report a strategy that mobilizes and guides migration of stem cells in the brain in vivo. We developed a safe stimulation paradigm to deliver directional currents in the brain. Tracking cells expressing GFP demonstrated electrical mobilization and guidance of migration of human neural stem cells, even against co-existing intrinsic cues in the rostral migration stream. Transplanted cells were observed at 3 weeks and 4 months after stimulation in areas guided by the stimulation currents, and with indications of differentiation. Electrical stimulation thus may provide a potential approach to facilitate brain stem cell therapies.

People who drink around three cups of coffee a day may live longer than non-coffee drinkers, a landmark study has found.

The findings — published in the journal Annals of Internal Medicine — come from the largest study of its kind, in which scientists analyzed data from more than half a million people across 10 European countries to explore the effect of coffee consumption on risk of mortality.

“We found that higher coffee consumption was associated with a lower risk of death from any cause, and specifically for circulatory diseases, and digestive diseases,” said lead author Marc Gunter of the International Agency for Research on Cancer (IARC), formerly of Imperial’s School of Public Health. “Importantly, these results were similar across all of the 10 European countries, with variable coffee drinking habits and customs. Our study also offers important insights into the possible mechanisms for the beneficial health effects of coffee.”

Healthier livers, better glucose control

Using data from the EPIC study (European Prospective Investigation into Cancer and Nutrition), the group analyzed data from 521,330 people over the age of 35 from 10 EU countries, including the UK, France, Denmark, and Italy. People’s diets were assessed using questionnaires and interviews, with the highest level of coffee consumption (by volume) reported in Denmark (900 mL per day) and the lowest in Italy (approximately 92 mL per day). Those who drank more coffee were also more likely to be younger, to be smokers and drinkers, and to eat more meat and fewer fruits and vegetables.

After 16 years of follow-up, almost 42,000 people in the study had died from a range of conditions, including cancer, circulatory diseases, heart failure, and stroke. Following careful statistical adjustment for lifestyle factors such as diet and smoking, the researchers found that the group with the highest coffee consumption had a lower risk of death from all causes, compared to those who did not drink coffee.
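Adjustments of this kind are typically made with a Cox proportional-hazards model. The sketch below shows the general shape of such an analysis using the open-source lifelines library; the data, columns, and effect sizes are entirely synthetic, so it reproduces nothing from the actual EPIC analysis.

```python
# Minimal sketch of the kind of adjustment described: a Cox proportional-
# hazards model with coffee intake plus lifestyle confounders. All data
# below are synthetic; this is not the EPIC analysis.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n = 5000
df = pd.DataFrame({
    "coffee_cups": rng.integers(0, 6, n),
    "smoker": rng.integers(0, 2, n),
    "age": rng.uniform(35, 70, n),
})
# Synthetic hazard: smoking and age raise risk, coffee slightly lowers it.
risk = 0.04 * df.age + 0.8 * df.smoker - 0.1 * df.coffee_cups
time = rng.exponential(1.0 / np.exp(risk - 3))
df["years"] = np.minimum(time, 16.0)          # 16 years of follow-up
df["died"] = (time <= 16.0).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="died")
print(cph.summary[["exp(coef)"]])   # hazard ratios, adjusted for covariates
```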

They found that decaffeinated coffee had a similar effect.

In a subset of 14,000 people, they also analyzed metabolic biomarkers, and found that coffee drinkers may have healthier livers overall and better glucose control than non-coffee drinkers.

According to the group, more research is needed to find out which of the compounds in coffee may be giving a protective effect or potentially benefiting health.* Other avenues of research to explore could include intervention studies, looking at the effect of coffee drinking on health outcomes.

However, Gunter noted that “due to the limitations of observational research, we are not at the stage of recommending people to drink more or less coffee. That said, our results suggest that moderate coffee drinking is not detrimental to your health, and that incorporating coffee into your diet could have health benefits.”

The study was funded by the European Commission Directorate General for Health and Consumers and the IARC.

* Coffee contains a number of compounds that can interact with the body, including caffeine, diterpenes and antioxidants, and the ratios of these compounds can be affected by the variety of methods used to prepare coffee.

Abstract of Coffee Drinking and Mortality in 10 European Countries: A Multinational Cohort Study

Background: The relationship between coffee consumption and mortality in diverse European populations with variable coffee preparation methods is unclear.

Objective: To examine whether coffee consumption is associated with all-cause and cause-specific mortality.

Limitations: Reverse causality may have biased the findings; however, results did not differ after exclusion of participants who died within 8 years of baseline. Coffee-drinking habits were assessed only once.

Conclusion: Coffee drinking was associated with reduced risk for death from various causes. This relationship did not vary by country.

Primary Funding Source: European Commission Directorate-General for Health and Consumers and International Agency for Research on Cancer.

Abstract of Association of Coffee Consumption With Total and Cause-Specific Mortality Among Nonwhite Populations

Background: Coffee consumption has been associated with reduced risk for death in prospective cohort studies; however, data in nonwhites are sparse.

Objective: To examine the association of coffee consumption with risk for total and cause-specific death.

Design: The MEC (Multiethnic Cohort), a prospective population-based cohort study established between 1993 and 1996.

Brain-wide activity in a zebrafish when it sees and tries to pursue prey (credit: Ehud Isacoff lab/UC Berkeley)

Imagine replacing a damaged eye with a window directly into the brain — one that communicates with the visual part of the cerebral cortex by reading from a million individual neurons and simultaneously stimulating 1,000 of them with single-cell accuracy, allowing someone to see again.

That’s the goal of a $21.6 million DARPA award to the University of California, Berkeley (UC Berkeley), one of six organizations funded by DARPA’s Neural Engineering System Design program announced this week to develop implantable, biocompatible neural interfaces that can compensate for visual or hearing deficits.*

The UCB researchers ultimately hope to build a device for use in humans. But the researchers’ goal during the four-year funding period is more modest: to create a prototype to read and write to the brains of model organisms — allowing for neural activity and behavior to be monitored and controlled simultaneously. These organisms include zebrafish larvae, which are transparent, and mice, via a transparent window in the skull.

UC Berkeley | Brain activity as a zebrafish stalks its prey

“The ability to talk to the brain has the incredible potential to help compensate for neurological damage caused by degenerative diseases or injury,” said project leader Ehud Isacoff, a UC Berkeley professor of molecular and cell biology and director of the Helen Wills Neuroscience Institute. “By encoding perceptions into the human cortex, you could allow the blind to see or the paralyzed to feel touch.”

How to read/write the brain

To communicate with the brain, the team will first insert a gene into neurons that makes fluorescent proteins, which flash when a cell fires an action potential. This will be accompanied by a second gene that makes a light-activated “optogenetic” protein, which stimulates neurons in response to a pulse of light.

Peering into a mouse brain with a light field microscope to capture live neural activity of hundreds of individual neurons in a 3D section of tissue at video speed (30 Hz) (credit: The Rockefeller University)

To read, the team is developing a miniaturized “light field microscope.”** Mounted on a small window in the skull, it peers through the surface of the brain to visualize up to a million neurons at a time at different depths and monitor their activity.***

This microscope is based on the revolutionary “light field camera,” which captures light through an array of lenses and reconstructs images computationally in any focus.
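The computational refocusing idea can be sketched with the classic “shift-and-add” algorithm: translate each sub-aperture view in proportion to its lens offset and a chosen focal parameter, then average. The array sizes and data below are synthetic stand-ins, not the microscope’s actual processing.

```python
# Minimal sketch of light-field refocusing by "shift and add": each
# sub-aperture view is translated in proportion to its lens offset and
# a chosen focal parameter, then the views are averaged. Synthetic data.
import numpy as np

U = V = 5           # 5x5 grid of lenslet views
H = W = 64
views = np.random.default_rng(2).random((U, V, H, W))   # toy sub-images

def refocus(views, alpha):
    """alpha selects the synthetic focal plane (0 = as captured)."""
    U, V, H, W = views.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - U // 2)))   # per-view shift
            dx = int(round(alpha * (v - V // 2)))
            out += np.roll(views[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

near = refocus(views, alpha=1.5)    # focus nearer than the capture plane
far  = refocus(views, alpha=-1.5)   # focus farther away
print(near.shape, far.shape)
```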

A holographic projection created by a spatial light modulator would illuminate (“write”) one set of neurons at one depth — those patterned by the letter a, for example — and simultaneously illuminate other sets of neurons at other depths (z level) or in regions of the visual cortex, such as neurons with b or c patterns. That creates three-dimensional holograms that can light up hundreds of thousands of neurons at multiple depths, just under the cortical surface. (credit: Valentina Emiliani/University of Paris, Descartes)

The combined read-write function will eventually be used to directly encode perceptions into the human cortex — inputting a visual scene to enable a blind person to see. The goal is to eventually enable physicians to monitor and activate thousands to millions of individual human neurons using light.

Isacoff, who specializes in using optogenetics to study the brain’s architecture, can already successfully read from thousands of neurons in the brain of a larval zebrafish, using a large microscope that peers through the transparent skin of an immobilized fish, and simultaneously write to a similar number.

The team will also develop computational methods that identify the brain activity patterns associated with different sensory experiences, hoping to learn the rules well enough to generate “synthetic percepts” — the feeling of something being touched by a person with a missing hand, for example.

The brain team includes ten UC Berkeley faculty and researchers from Lawrence Berkeley National Laboratory, Argonne National Laboratory, and the University of Paris, Descartes.

* In future articles, KurzweilAI will cover the other research projects announced by DARPA’s Neural Engineering System Design program, which is part of the U.S. NIH Brain Initiative.

** Light penetrates only the first few hundred microns of the surface of the brain’s cortex, which is the outer wrapping of the brain responsible for high-order mental functions, such as thinking and memory but also interpreting input from our senses. This thin outer layer nevertheless contains cell layers that represent visual and touch sensations.

Jack Gallant | Movie reconstruction from human brain activity

Team member Jack Gallant, a UC Berkeley professor of psychology, has shown that it’s possible to interpret what someone is seeing solely from measured neural activity in the visual cortex.

*** Developed by another collaborator, Valentina Emiliani at the University of Paris, Descartes, the light-field microscope and spatial light modulator will be shrunk to fit inside a cube one centimeter (two-fifths of an inch) on a side, so the device can be carried comfortably on the skull. During the next four years, team members will miniaturize the microscope, taking advantage of compressed light field microscopy developed by Ren Ng, which takes images with a flat sheet of lenses that allows focusing at all depths through a material. Several years ago Ng, now a UC Berkeley assistant professor of electrical engineering and computer sciences, invented the light field camera.

As detailed in a paper to be presented Aug. 2 at SIGGRAPH 2017, a University of Washington team successfully generated a highly realistic video of former president Barack Obama talking about terrorism, fatherhood, job creation, and other topics, using audio clips of those speeches and existing weekly video addresses in which he originally spoke on different topics.

Realistic audio-to-video conversion has practical applications like improving video conferencing for meetings (streaming audio over the internet takes up far less bandwidth than video, reducing video glitches), or holding a conversation with a historical figure in virtual reality, said Ira Kemelmacher-Shlizerman, an assistant professor at the UW’s Paul G. Allen School of Computer Science & Engineering.

This beats previous audio-to-video conversion processes, which have involved filming multiple people in a studio saying the same sentences over and over to try to capture how a particular sound correlates with different mouth shapes, a process that is expensive, tedious, and time-consuming. The new machine-learning tool may also help overcome the “uncanny valley” problem, which has dogged efforts to create realistic video from audio.

How to do it

A neural network first converts the sounds from an audio file into basic mouth shapes. Then the system grafts and blends those mouth shapes onto an existing target video and adjusts the timing to create a realistic, lip-synced video of the person delivering the new speech. (credit: University of Washington)

1. Find or record a video of the person (or use video chat tools like Skype to create a new video) for the neural network to learn from. There are millions of hours of video that already exist from interviews, video chats, movies, television programs and other sources, the researchers note. (Obama was chosen because there were hours of presidential videos in the public domain.)

2. Train the neural network to watch videos of the person and translate different audio sounds into basic mouth shapes.

3. The system then uses the audio of an individual’s speech to generate realistic mouth shapes, which are grafted onto and blended with the head of that person. A small time shift lets the neural network anticipate what the speaker is going to say next (see the sketch after this list).

4. Currently, the neural network is designed to learn on one individual at a time, meaning that Obama’s voice — speaking words he actually uttered — is the only information used to “drive” the synthesized video. Future steps, however, include helping the algorithms generalize across situations to recognize a person’s voice and speech patterns with less data, with only an hour of video to learn from, for instance, instead of 14 hours.
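To make steps 2 and 3 concrete, here is a minimal sketch (not the UW system; the features, weights, and dimensions are invented stand-ins) of mapping per-frame audio features to mouth-shape coefficients, including the small anticipatory time shift mentioned in step 3.

```python
# Minimal sketch of steps 2-3 (not the UW system): map per-frame audio
# features to mouth-shape coefficients with a learned linear model, and
# shift the audio window slightly forward so the mouth can "anticipate"
# the next sound. Features, weights, and shapes are all toy stand-ins.
import numpy as np

rng = np.random.default_rng(3)
T, F, S = 100, 13, 18        # frames, audio features (e.g. MFCCs), shape dims
audio = rng.normal(size=(T, F))          # stand-in for extracted features
W = rng.normal(size=(F, S))              # would be learned from video

LOOKAHEAD = 2                            # frames of anticipation

def mouth_shapes(audio, lookahead=LOOKAHEAD):
    shifted = np.roll(audio, -lookahead, axis=0)   # peek at upcoming audio
    coeffs = shifted @ W                           # frame -> mouth shape
    # simple temporal smoothing so the mouth doesn't jitter frame to frame
    kernel = np.ones(5) / 5
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, "same"),
                               0, coeffs)

shapes = mouth_shapes(audio)             # one mouth shape per video frame
print(shapes.shape)                      # (100, 18), ready to graft/blend
```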

Fakes of fakes

So the obvious question is: could you use someone else’s voice on a video (given enough video footage)? The researchers said they decided against going down that path, but they didn’t say it was impossible.

Even more pernicious: the words of the person in the original video (not just the voice) could be faked using Princeton/Adobe’s “VoCo” software (when available) — simply by editing a text transcript of their voice recording — or the fake voice itself could be modified.

Or Disney Research’s FaceDirector could be used to edit recorded substitute facial expressions (along with the fake voice) into the video.

However, by reversing the process — feeding video into the neural network instead of just audio — one could also potentially develop algorithms that could detect whether a video is real or manufactured, the researchers note.

The research was funded by Samsung, Google, Facebook, Intel, and the UW Animation Research Labs. You can contact the research team at audiolipsync@cs.washington.edu.

Abstract of Synthesizing Obama: Learning Lip Sync from Audio

Given audio of President Barack Obama, we synthesize a high quality video of him speaking with accurate lip sync, composited into a target video clip. Trained on many hours of his weekly address footage, a recurrent neural network learns the mapping from raw audio features to mouth shapes. Given the mouth shape at each time instant, we synthesize high quality mouth texture, and composite it with proper 3D pose matching to change what he appears to be saying in a target video to match the input audio track. Our approach produces photorealistic results.

Researchers at Carnegie Mellon University’s Robotics Institute have developed a system that can detect and understand body poses and movements of multiple people from a video in real time — including, for the first time, the pose of each individual’s fingers.

The ability to recognize finger or hand poses, for instance, will make it possible for people to interact with computers in new and more natural ways, such as simply pointing at things.

That will also allow robots to perceive what you’re doing, what mood you’re in, and whether you can be interrupted, for example. Your self-driving car could get an early warning that a pedestrian is about to step into the street by monitoring your body language. The technology could also be used for behavioral diagnosis and rehabilitation for conditions such as autism, dyslexia, and depression, the researchers say.

This new method was developed at CMU’s NSF-funded Panoptic Studio, a two-story dome embedded with 500 video cameras, but the researchers can now do the same thing with a single camera and laptop computer.

Tracking multiple people in real time, particularly in social situations where they may be in contact with each other, presents a number of challenges. Yaser Sheikh, an associate professor of robotics at CMU, and his colleagues took a bottom-up approach, which first localizes all the body parts in a scene — arms, legs, faces, etc. — and then associates those parts with particular individuals.
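The association step can be pictured as an assignment problem: score every candidate pairing of detected parts, then keep the best one-to-one matching. The sketch below uses plain Euclidean distance as the pairing score purely for illustration; the CMU system learns its part-association scores from data, which this toy does not attempt:

```python
# Toy bottom-up grouping: link detected elbows to detected wrists
# across a whole scene by minimizing total pairing cost.
import numpy as np
from scipy.optimize import linear_sum_assignment

elbows = np.array([[100, 200], [300, 210], [500, 190]])  # (x, y) detections
wrists = np.array([[310, 260], [505, 240], [120, 255]])

# Pairwise cost: here simply distance (lower = more compatible).
cost = np.linalg.norm(elbows[:, None, :] - wrists[None, :, :], axis=2)

# Optimal one-to-one assignment of parts into limbs (Hungarian algorithm).
for e, w in zip(*linear_sum_assignment(cost)):
    print(f"limb: elbow {elbows[e]} -> wrist {wrists[w]}")
```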

Sheikh and his colleagues will present reports on their multiperson and hand-pose detection methods at CVPR 2017, the Computer Vision and Pattern Recognition Conference, July 21–26 in Honolulu.

A radical new 3D chip that combines computation and data storage in vertically stacked layers — allowing for processing and storing massive amounts of data at high speed in future transformative nanosystems — has been designed by researchers at Stanford University and MIT.

The new 3D-chip design* replaces silicon with carbon nanotubes (sheets of 2-D graphene formed into nanocylinders) and integrates resistive random-access memory (RRAM) cells.

Carbon-nanotube field-effect transistors (CNFETs) are an emerging transistor technology that can scale beyond the limits of silicon MOSFETs (conventional chips), and promise an order-of-magnitude improvement in energy-efficient computation. However, experimental demonstrations of CNFETs so far have been small-scale and limited to integrating only tens or hundreds of devices (see earlier 2015 Stanford research, “Skyscraper-style carbon-nanotube chip design…”).

The researchers integrated more than 1 million RRAM cells and 2 million carbon-nanotube field-effect transistors in the chip, making it the most complex nanoelectronic system ever made with emerging nanotechnologies, according to the researchers. RRAM is an emerging memory technology that promises high-capacity, non-volatile data storage, with improved speed, energy efficiency, and density, compared to dynamic random-access memory (DRAM).

Instead of requiring separate components, the RRAM cells and carbon nanotubes are built vertically over one another, creating a dense new 3D computer architecture** with interleaving layers of logic and memory. By using ultradense through-chip vias (electrical interconnecting wires passing between layers), the high delay with conventional wiring between computer components is eliminated.

The new 3D nanosystem can capture massive amounts of data every second, store it directly on-chip, perform in situ processing of the captured data, and produce “highly processed” information. “Such complex nanoelectronic systems will be essential for future high-performance, highly energy-efficient electronic systems,” the researchers say.

The new chip design aims to replace current chip designs, which separate computing and data storage, resulting in limited-speed connections.

Separate 2D chips have been required because “building conventional silicon transistors involves extremely high temperatures of over 1,000 degrees Celsius,” explains Max Shulaker, an assistant professor of electrical engineering and computer science at MIT and lead author of a paper published July 5, 2017 in the journal Nature. “If you then build a second layer of silicon circuits on top, that high temperature will damage the bottom layer of circuits.”

Instead, carbon nanotube circuits and RRAM memory can be fabricated at much lower temperatures: below 200 degrees Celsius. “This means they can be built up in layers without harming the circuits beneath,” says Shulaker.

Overcoming communication and computing bottlenecks

As applications analyze increasingly massive volumes of data, the limited rate at which data can be moved between different chips is creating a critical communication “bottleneck.” And with limited real estate on increasingly miniaturized chips, there is not enough room to place chips side-by-side.

At the same time, embedded intelligence in areas ranging from autonomous driving to personalized medicine is now generating huge amounts of data, but silicon transistors are no longer improving at the historic rate that they have for decades.

Instead, three-dimensional integration is the most promising approach to continue the technology-scaling path set forth by Moore’s law, allowing an increasing number of devices to be integrated per unit volume, according to Jan Rabaey, a professor of electrical engineering and computer science at the University of California at Berkeley, who was not involved in the research.

Three-dimensional integration “leads to a fundamentally different perspective on computing architectures, enabling an intimate interweaving of memory and logic,” he says. “These structures may be particularly suited for alternative learning-based computational paradigms such as brain-inspired systems and deep neural nets, and the approach presented by the authors is definitely a great first step in that direction.”

The new 3D design provides several benefits for future computing systems, including:

Logic circuits made from carbon nanotubes can be an order of magnitude more energy-efficient compared to today’s logic made from silicon.

The dense through-chip vias (wires) can enable vertical connectivity that is 1,000 times more dense than conventional packaging and chip-stacking solutions allow, which greatly improves the data communication bandwidth between vertically stacked functional layers. For example, each sensor in the top layer can connect directly to its respective underlying memory cell with an inter-layer via. This enables the sensors to write their data in parallel directly into memory and at high speed (a back-of-envelope sketch of this bandwidth advantage follows this list).

The architecture is compatible in both fabrication and design with today’s CMOS silicon infrastructure.
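As a rough illustration of the bandwidth argument in the second benefit above, compare moving a million sensor readings over one shared bus with moving them over per-sensor vias in parallel. All rates and sizes below are invented purely for the comparison:

```python
# Back-of-envelope comparison: shared serial bus vs. dense parallel vias.
N_SENSORS = 1_000_000    # sensors on the top layer
WORD_BITS = 8            # bits per sensor reading (assumption)
BUS_RATE = 10e9          # one shared 10 Gbit/s bus (assumption)
VIA_RATE = 10e6          # each via link at 10 Mbit/s (assumption)

serial_time = N_SENSORS * WORD_BITS / BUS_RATE   # readings move one by one
parallel_time = WORD_BITS / VIA_RATE             # all readings move at once

print(f"shared bus: {serial_time * 1e6:7.1f} microseconds per frame")
print(f"dense vias: {parallel_time * 1e6:7.1f} microseconds per frame")
```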

Shulaker next plans to work with Massachusetts-based semiconductor company Analog Devices to develop new versions of the system.

This work was funded by the Defense Advanced Research Projects Agency, the National Science Foundation, Semiconductor Research Corporation, STARnet SONIC, and member companies of the Stanford SystemX Alliance.

* As a working-prototype demonstration of the potential of the technology, the researchers took advantage of the ability of carbon nanotubes to also act as sensors. On the top layer of the chip, they placed more than 1 million carbon nanotube-based sensors, which they used to detect and classify ambient gases for detecting signs of disease by sensing particular compounds in a patient’s breath, says Shulaker. By layering sensing, data storage, and computing, the chip was able to measure each of the sensors in parallel, and then write directly into its memory, generating huge bandwidth in just one device, according to Shulaker. The top layer could be replaced with additional computation or data storage subsystems, or with other forms of input/output, he explains.

Abstract of Three-dimensional integration of nanotechnologies for computing and data storage on a single chip

The computing demands of future data-intensive applications will greatly exceed the capabilities of current electronics, and are unlikely to be met by isolated improvements in transistors, data storage technologies or integrated circuit architectures alone. Instead, transformative nanosystems, which use new nanotechnologies to simultaneously realize improved devices and new integrated circuit architectures, are required. Here we present a prototype of such a transformative nanosystem. It consists of more than one million resistive random-access memory cells and more than two million carbon-nanotube field-effect transistors—promising new nanotechnologies for use in energy-efficient digital logic circuits and for dense data storage—fabricated on vertically stacked layers in a single chip. Unlike conventional integrated circuit architectures, the layered fabrication realizes a three-dimensional integrated circuit architecture with fine-grained and dense vertical connectivity between layers of computing, data storage, and input and output (in this instance, sensing). As a result, our nanosystem can capture massive amounts of data every second, store it directly on-chip, perform in situ processing of the captured data, and produce ‘highly processed’ information. As a working prototype, our nanosystem senses and classifies ambient gases. Furthermore, because the layers are fabricated on top of silicon logic circuitry, our nanosystem is compatible with existing infrastructure for silicon-based technologies. Such complex nano-electronic systems will be essential for future high-performance and highly energy-efficient electronic systems.

Multiwall carbon nanotubes (MWCNTs) could safely help repair damaged connections between neurons by serving as supporting scaffolds for growth or as connections between neurons.

That’s the conclusion of an in-vitro (lab) open-access study with cultured neurons (taken from the hippocampus of neonatal rats) by a multi-disciplinary team of scientists in Italy and Spain, published in the journal Nanomedicine: Nanotechnology, Biology, and Medicine.

The researchers found that the multiwall carbon nanotubes:

Facilitate the full growth of neurons and the formation of new synapses. “This growth, however, is not indiscriminate and unlimited since, as we proved, after a few weeks, a physiological balance is attained.”

Do not interfere with the composition of lipids (cholesterol in particular), which make up the cellular membrane in neurons.

Do not interfere in the transmission of signals through synapses.

The researchers also noted that they recently reported (in an open access paper) low tissue reaction when multiwall carbon nanotubes were implanted in vivo (in live animals) to reconnect damaged spinal neurons.

The researchers say they proved that carbon nanotubes “perform excellently in terms of duration, adaptability and mechanical compatibility with tissue” and that “now we know that their interaction with biological material, too, is efficient. Based on this evidence, we are already studying an in vivo application, and preliminary results appear to be quite promising in terms of recovery of lost neurological functions.”

Abstract of Sculpting neurotransmission during synaptic development by 2D nanostructured interfaces

Carbon nanotube-based biomaterials critically contribute to the design of many prosthetic devices, with a particular impact in the development of bioelectronics components for novel neural interfaces. These nanomaterials combine excellent physical and chemical properties with peculiar nanostructured topography, thought to be crucial to their integration with neural tissue as long-term implants. The junction between carbon nanotubes and neural tissue can be particularly worthy of scientific attention and has been reported to significantly impact synapse construction in cultured neuronal networks. In this framework, the interaction of 2D carbon nanotube platforms with biological membranes is of paramount importance. Here we study carbon nanotube ability to interfere with lipid membrane structure and dynamics in cultured hippocampal neurons. While excluding that carbon nanotubes alter the homeostasis of neuronal membrane lipids, in particular cholesterol, we document in aged cultures an unprecedented functional integration between carbon nanotubes and the physiological maturation of the synaptic circuits.

Gentle exercise like tai chi can reduce the risk of inflammation-related diseases like cancer and accelerated aging. (credit: iStock)

Mind-body interventions such as meditation, yoga*, and tai chi can reverse the molecular reactions in our DNA that cause ill-health and depression, according to a study by scientists at the universities of Coventry and Radboud.

When a person is exposed to a stressful event, their sympathetic nervous system (responsible for the “fight-or-flight” response) is triggered, which increases production of a molecule called nuclear factor kappa B (NF-kB). That molecule then activates genes to produce proteins called cytokines that cause inflammation at the cellular level, affecting the body, brain, and immune system.

That’s useful as a short-lived fight-or-flight reaction. However, if persistent, it leads to a higher risk of cancer, accelerated aging, and psychiatric disorders like depression.

But in a paper published June 16, 2017 in the open-access journal Frontiers in Immunology, the researchers reveal findings of 18 studies (featuring 846 participants over 11 years) indicating that people who practice mind-body interventions exhibit the opposite effect. They showed a decrease in production of NF-kB and cytokines — reducing the pro-inflammatory gene expression pattern and the risk of inflammation-related diseases and conditions.

David Gorski, MD, PhD, has published a critique of this study here. (Lead author Ivana Burić has replied in the comments below.)

Lowering risks from sitting

Brisk walks can offset health hazards of sitting (credit: iStock)

In addition to stress effects, increased sitting is known to be associated with an increased risk of cardiovascular disease, diabetes, and death from all causes.

But regular two-minute brisk walks every 30 minutes (in addition to daily 30-minute walks) significantly reduce levels of triglycerides (lipids, or fatty acids) that lead to clogged arteries, researchers from New Zealand’s University of Otago report in a paper published June 19, 2017 in the Journal of Clinical Lipidology.**

The lipid levels were measured in response to a meal consumed around 24 hours after starting the activity. High levels of triglycerides are linked to hardening of the arteries and other cardiovascular conditions.

They previously found that brisk walks for two minutes every 30 minutes also lower blood glucose and insulin levels.

* However, yoga causes musculoskeletal pain in more than 10 per cent of practitioners per year, according to recent research at the University of Sydney published in the Journal of Bodywork and Movement Therapies. “We also found that yoga can exacerbate existing pain, with 21 per cent of existing injuries made worse by doing yoga, particularly pre-existing musculoskeletal pain in the upper limbs,” said lead researcher Associate Professor Evangelos Pappas from the University’s Faculty of Health Sciences.

“In terms of severity, more than one-third of cases of pain caused by yoga were serious enough to prevent yoga participation and lasted more than 3 months.” The study found that most “new” yoga pain was in the upper extremities (shoulder, elbow, wrist, hand), possibly due to downward dog and similar postures that put weight on the upper limbs. However, 74 per cent of participants in the study reported that existing pain was actually improved by yoga, highlighting the complex relationship between musculoskeletal pain and yoga practice.

In a related study, University of Utah researchers used observational data from the National Health and Nutrition Examination Survey (NHANES) to examine whether longer durations of low-intensity activities (e.g., standing) vs. light-intensity activities (e.g., casual walking, light gardening, cleaning) extend the lifespan of people who are sedentary for more than half of their waking hours.

They found that adding two minutes of low-intensity activities every hour (plus 2.5 hours of moderate exercise each week, which strengthens the heart, muscles, and bones) was associated with a 33 percent lower risk of dying. “It was fascinating to see the results because the current national focus is on moderate or vigorous activity,” says lead author Srinivasan Beddhu, M.D., professor of internal medicine. “To see that light activity had an association with lower mortality is intriguing.”

UPDATE July 5, 2017 — Added mention of a critique to the Coventry–Radboud study.

By combining machine-learning algorithms with fMRI brain imaging technology, Carnegie Mellon University (CMU) scientists have discovered, in essence, how to “read minds.”

The researchers used functional magnetic resonance imaging (fMRI) to view how the brain encodes various thoughts (based on blood-flow patterns in the brain). They discovered that the mind’s building blocks for constructing complex thoughts are formed, not by words, but by specific combinations of the brain’s various sub-systems.

“One of the big advances of the human brain was the ability to combine individual concepts into complex thoughts, to think not just of ‘bananas,’ but ‘I like to eat bananas in the evening with my friends,’” said CMU’s Marcel Just, the D.O. Hebb University Professor of Psychology in the Dietrich College of Humanities and Social Sciences. “We have finally developed a way to see thoughts of that complexity in the fMRI signal. The discovery of this correspondence between thoughts and brain activation patterns tells us what the thoughts are built of.”

The researchers used 240 specific events (described by sentences such as “The storm destroyed the theater”) in the study, with seven adult participants. They measured the brain’s coding of these events using 42 “neurally plausible semantic features” — such as person, setting, size, social interaction, and physical action (as shown in the word clouds in the illustration above). By measuring the specific activation of each of these 42 features in a person’s brain system, the program could tell what types of thoughts that person was focused on.

The researchers used a computational model to assess how the detected brain activation patterns (shown in the top illustration, for example) for 239 of the event sentences corresponded to the detected neurally plausible semantic features that characterized each sentence. The program was then able to decode the features of the 240th left-out sentence. (For “cross-validation,” they did the same for the other 239 sentences.)

The model was able to predict the features of the left-out sentence with 87 percent accuracy, despite never being exposed to its activation before. It was also able to work in the other direction: to predict the activation pattern of a previously unseen sentence, knowing only its semantic features.
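The leave-one-out scheme can be sketched with synthetic data: fit a linear encoding model from semantic features to activation patterns on 239 sentences, then decode the held-out sentence’s features by inverting the model. This is a toy stand-in for the study’s regression pipeline, with random numbers in place of real fMRI data:

```python
# Leave-one-sentence-out decoding sketch (synthetic stand-in for fMRI).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
N_SENT, N_FEAT, N_VOX = 240, 42, 500        # sentences, features, voxels
F = rng.normal(size=(N_SENT, N_FEAT))       # semantic feature vectors
W = rng.normal(size=(N_FEAT, N_VOX))        # hidden "true" encoding
A = F @ W + 0.1 * rng.normal(size=(N_SENT, N_VOX))  # activation patterns

held_out = 239
train = np.arange(N_SENT) != held_out

# Encoding model: semantic features -> activation, fit on 239 sentences.
enc = Ridge(alpha=1.0).fit(F[train], A[train])

# Decode the left-out sentence by inverting the encoder (least squares).
f_hat, *_ = np.linalg.lstsq(enc.coef_, A[held_out] - enc.intercept_,
                            rcond=None)
print(f"feature correlation: {np.corrcoef(f_hat, F[held_out])[0, 1]:.2f}")
```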

“Our method overcomes the unfortunate property of fMRI to smear together the signals emanating from brain events that occur close together in time, like the reading of two successive words in a sentence,” Just explained. “This advance makes it possible for the first time to decode thoughts containing several concepts. That’s what most human thoughts are composed of.”

“A next step might be to decode the general type of topic a person is thinking about, such as geology or skateboarding,” he added. “We are on the way to making a map of all the types of knowledge in the brain.”

Or, if the CMU method could be adapted to noninvasive functional near-infrared spectroscopy (fNIRS), it might incorporate Facebook’s Building8 research concept (proposed by former DARPA head Regina Dugan): a filter for creating quasi-ballistic photons, which avoids diffusion and creates a narrow beam for precise targeting of brain areas, combined with a new method of detecting blood-oxygen levels.

The CMU research is supported by the Office of the Director of National Intelligence (ODNI) via the Intelligence Advanced Research Projects Activity (IARPA) and the Air Force Research Laboratory (AFRL).

CMU has created some of the first cognitive tutors, helped to develop the Jeopardy-winning Watson, founded a groundbreaking doctoral program in neural computation, and is the birthplace of artificial intelligence and cognitive psychology. CMU also launched BrainHub, an initiative that focuses on how the structure and activity of the brain give rise to complex behaviors.

Abstract of Predicting the Brain Activation Pattern Associated With the Propositional Content of a Sentence: Modeling Neural Representations of Events and States

Even though much has recently been learned about the neural representation of individual concepts and categories, neuroimaging research is only beginning to reveal how more complex thoughts, such as event and state descriptions, are neurally represented. We present a predictive computational theory of the neural representations of individual events and states as they are described in 240 sentences. Regression models were trained to determine the mapping between 42 neurally plausible semantic features (NPSFs) and thematic roles of the concepts of a proposition and the fMRI activation patterns of various cortical regions that process different types of information. Given a semantic characterization of the content of a sentence that is new to the model, the model can reliably predict the resulting neural signature, or, given an observed neural signature of a new sentence, the model can predict its semantic content. The models were also reliably generalizable across participants. This computational model provides an account of the brain representation of a complex yet fundamental unit of thought, namely, the conceptual content of a proposition. In addition to characterizing a sentence representation at the level of the semantic and thematic features of its component concepts, factor analysis was used to develop a higher level characterization of a sentence, specifying the general type of event representation that the sentence evokes (e.g., a social interaction versus a change of physical state) and the voxel locations most strongly associated with each of the factors.

Individual neurons firing within a volume of brain tissue (credit: The Rockefeller University)

A team of scientists has peered into a mouse brain with light, capturing live neural activity of hundreds of individual neurons in a 3D section of tissue at video speed (30 Hz) in a single recording for the first time.

Besides serving as a powerful research tool, this discovery means it may now be possible to “alter stimuli in real time based on what we see going on in the animal’s brain,” said Rockefeller University’s Alipasha Vaziri, senior author of an open-access paper published June 26, 2017 in Nature Methods.

By dramatically reducing the time and computational resources required to generate such an image, the algorithm opens the door to more sophisticated experiments, says Vaziri, head of the Rockefeller Laboratory of Neurotechnology and Biophysics. “Our goal is to better understand brain function by monitoring the dynamics within densely interconnected, three-dimensional networks of neurons,” Vaziri explained.

The research “may open the door to a range of applications, including real-time whole-brain recording and closed-loop interrogation of neuronal population activity in combination with optogenetics and behavior,” the paper authors suggest.

Watching mice think in real time

The scientists first engineered the animals’ neurons to fluoresce (glow), using a method called optogenetics. The stronger the neural signal, the brighter the cells shine. To capture this activity, they used a technique known as “light-field microscopy,” in which an array of lenses generates views from a variety of perspectives. These images are then combined to create a three-dimensional rendering, using a new algorithm called “seeded iterative demixing” (SID) developed by the team.

Without the new algorithm, the individual neurons are difficult to distinguish. (credit: The Rockefeller University)

To record the activity of all neurons at the same time, their images have to be captured on a camera simultaneously. In earlier research, this has made it difficult to distinguish the signals emitted by all cells as the light from the mouse’s neurons bounces off the surrounding, opaque tissue. The neurons typically show up as an indistinct, flickering mass.

The SID algorithm now makes it possible to simultaneously capture both the location of the individual neurons and the timing of their signals within a three-dimensional section of brain containing multiple layers of neurons, down to a depth of 0.38 millimeters.* Vaziri and his colleagues were able to track the precise coordinates of hundreds of active neurons over an extended period of time in mice that were awake and had the option of walking on a customized treadmill.

* “SID can capture neuronal dynamics in vivo within a volume of 900 × 900 × 260 μm located as deep as 380 μm in the mouse cortex or hippocampus at a 30-Hz volume rate while discriminating signals from neurons as close as 20 μm apart, at a computational cost three orders of magnitude less than that of frame-by-frame image reconstruction.” – Tobias Nöbauer et al./Nature Methods
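The demixing idea at SID’s core, stripped of its light-field optics and scattering machinery, is a matrix factorization: a (pixels × time) movie is decomposed into nonnegative spatial footprints times temporal activity traces. Here is a generic sketch with synthetic data; this is not the SID algorithm itself, which additionally seeds the factorization from a light-field reconstruction:

```python
# Generic source demixing: factor a noisy movie into per-neuron
# spatial footprints and temporal activity traces with NMF.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
N_PIX, N_T, N_CELLS = 400, 300, 5
footprints = rng.random((N_PIX, N_CELLS)) ** 8       # sparse cell shapes
traces = rng.random((N_CELLS, N_T))                  # activity time courses
movie = footprints @ traces + 0.01 * rng.random((N_PIX, N_T))

model = NMF(n_components=N_CELLS, init="nndsvd", max_iter=500)
spatial = model.fit_transform(movie)    # recovered footprints (pix x cells)
temporal = model.components_            # recovered traces (cells x time)
print(spatial.shape, temporal.shape)    # (400, 5) (5, 300)
```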

UPDATE June 29, 2017 — Added: “The research ‘may open the door to a range of applications, including real-time whole-brain recording and closed-loop interrogation of neuronal population activity in combination with optogenetics and behavior,’ the paper authors suggest.”

Abstract of Video rate volumetric Ca2+ imaging across cortex using seeded iterative demixing (SID) microscopy

Light-field microscopy (LFM) is a scalable approach for volumetric Ca2+ imaging with high volumetric acquisition rates (up to 100 Hz). Although the technology has enabled whole-brain Ca2+ imaging in semi-transparent specimens, tissue scattering has limited its application in the rodent brain. We introduce seeded iterative demixing (SID), a computational source-extraction technique that extends LFM to the mammalian cortex. SID can capture neuronal dynamics in vivo within a volume of 900 × 900 × 260 μm located as deep as 380 μm in the mouse cortex or hippocampus at a 30-Hz volume rate while discriminating signals from neurons as close as 20 μm apart, at a computational cost three orders of magnitude less than that of frame-by-frame image reconstruction. We expect that the simplicity and scalability of LFM, coupled with the performance of SID, will open up a range of applications including closed-loop experiments.

Researchers at the College of Engineering at Carnegie Mellon University (CMU) have developed a new automated feedback system for personalizing exoskeletons to achieve optimal performance.

Exoskeletons can be used to augment human abilities. For example, they can provide more endurance while walking, help lift a heavy load, improve athletic performance, and help a stroke patient walk again.

But current one-size-fits-all exoskeleton devices, despite their potential, “have not improved walking performance as much as we think they should,” said Steven Collins, a professor of Mechanical Engineering and senior author of a paper published Friday June 23, 2017 in Science.

The problem: An exoskeleton needs to be adjusted (and re-adjusted) to work effectively for each user — currently, a time-consuming, iffy manual process.

So the CMU engineers developed a more effective “human-in-the-loop optimization” technique that measures the amount of energy the walker expends by monitoring their breathing* — automatically adjusting the exoskeleton’s ankle dynamics to minimize required human energy expenditure.**

Using real-time metabolic cost estimation for each individual, the CMU software algorithm, combined with versatile emulator hardware, optimized the exoskeleton torque pattern for one ankle while walking, running, and carrying a load on a treadmill. The algorithm automatically made optimized adjustments for each pattern, based on measurements of a person’s energy use for 32 different walking patterns over the course of an hour. (credit: Juanjuan Zhang et al./Science, adapted by KurzweilAI)

In a lab study with 11 healthy volunteers, the new technique resulted in an average reduction in effort of 24% compared to participants walking with the exoskeleton powered off. The technique yielded higher user benefits than in any exoskeleton study to date, including devices acting at all joints on both legs, according to the researchers.

* “In daily life, a proxy measure such as heart rate or muscle activity could be used for optimization, providing noisier but more abundant performance data.” — Juanjuan Zhang et al./Science

** Ankle torque in the lab study was determined by four parameters: peak torque, timing of peak torque, and rise and fall times. This method was chosen to allow comparisons to a prior study that used the same hardware.
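A simplified picture of the optimization loop, with a simulated cost in place of the respiratory measurement and plain hill-climbing in place of the study’s evolution-strategy optimizer; the four torque parameters and the 32-pattern budget come from the paper, everything else below is an assumption:

```python
# Human-in-the-loop optimization sketch over four torque parameters.
import numpy as np

rng = np.random.default_rng(2)
OPTIMUM = np.array([0.8, 0.53, 0.25, 0.10])   # unknown best settings (toy)

def metabolic_cost(params):
    """Noisy stand-in for a ~2-minute respiratory measurement."""
    return np.sum((params - OPTIMUM) ** 2) + 0.01 * rng.normal()

best = np.full(4, 0.5)                  # initial torque pattern
best_cost = metabolic_cost(best)
for _ in range(32):                     # 32 patterns, as in the study
    candidate = np.clip(best + 0.1 * rng.normal(size=4), 0, 1)
    cost = metabolic_cost(candidate)
    if cost < best_cost:                # keep only improvements
        best, best_cost = candidate, cost

print("optimized parameters:", np.round(best, 2))
```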

Science/AAAS | Personalized Exoskeletons Are Taking Support One Step Farther

Abstract of Human-in-the-loop optimization of exoskeleton assistance during walking

Exoskeletons and active prostheses promise to enhance human mobility, but few have succeeded. Optimizing device characteristics on the basis of measured human performance could lead to improved designs. We have developed a method for identifying the exoskeleton assistance that minimizes human energy cost during walking. Optimized torque patterns from an exoskeleton worn on one ankle reduced metabolic energy consumption by 24.2 ± 7.4% compared to no torque. The approach was effective with exoskeletons worn on one or both ankles, during a variety of walking conditions, during running, and when optimizing muscle activity. Finding a good generic assistance pattern, customizing it to individual needs, and helping users learn to take advantage of the device all contributed to improved economy. Optimization methods with these features can substantially improve performance.

A GelSight sensor attached to a robot’s gripper enables the robot to determine precisely where it has grasped a small screwdriver, removing it from and inserting it back into a slot, even when the gripper screens the screwdriver from the robot’s camera. (credit: Robot Locomotion Group at MIT)

The “GelSight” sensor consists of a block of transparent soft rubber — the “gel” of its name — with one face coated with metallic paint. It is mounted on one side of a robotic gripper. When the paint-coated face is pressed against an object, the face conforms to the object’s shape and the metallic paint makes the object’s surface reflective. Mounted on the sensor opposite the paint-coated face of the rubber block are three colored lights at different angles and a single camera.

Humans gauge hardness by the degree to which the contact area between the object and our fingers changes as we press on it. Softer objects tend to flatten more, increasing the contact area. The MIT researchers used the same approach.

A GelSight sensor, pressed against each object manually, recorded how the contact pattern changed over time, essentially producing a short movie for each object. A neural network was then used to look for correlations between changes in contact patterns and hardness measurements. The resulting system takes frames of video as inputs and produces hardness scores with very high accuracy.
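In outline, the estimator is a regression network over the contact movie. The sketch below encodes each frame with a small CNN, pools the frame embeddings over time, and regresses a single hardness score; the architecture is illustrative only, and the published system differs in its details:

```python
# Hypothetical frames-to-hardness regression network.
import torch
import torch.nn as nn

class HardnessNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),    # -> 32-D per frame
        )
        self.head = nn.Linear(32, 1)

    def forward(self, frames):                  # (N, T, 3, H, W)
        n, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1))   # per-frame features
        feats = feats.view(n, t, -1).mean(dim=1)     # pool over time
        return self.head(feats).squeeze(-1)          # hardness score

clips = torch.randn(2, 10, 3, 64, 64)   # two 10-frame contact movies
print(HardnessNet()(clips).shape)       # torch.Size([2])
```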

The researchers also designed control algorithms that use a computer vision system to guide the robot’s gripper toward a tool and then turn location estimation over to a GelSight sensor once the robot has the tool in hand.

“I think that the GelSight technology, as well as other high-bandwidth tactile sensors, will make a big impact in robotics,” says Sergey Levine, an assistant professor of electrical engineering and computer science at the University of California at Berkeley. “For humans, our sense of touch is one of the key enabling factors for our amazing manual dexterity. Current robots lack this type of dexterity and are limited in their ability to react to surface features when manipulating objects. If you imagine fumbling for a light switch in the dark, extracting an object from your pocket, or any of the other numerous things that you can do without even thinking — these all rely on touch sensing.”

The researchers presented their work in two papers at the International Conference on Robotics and Automation.

Wenzhen Yuan | Measuring hardness of fruits with GelSight sensor

Abstract of Tracking Objects with Point Clouds from Vision and Touch

We present an object-tracking framework that fuses point cloud information from an RGB-D camera with tactile information from a GelSight contact sensor. GelSight can be treated as a source of dense local geometric information, which we incorporate directly into a conventional point-cloud-based articulated object tracker based on signed-distance functions. Our implementation runs at 12 Hz using an online depth reconstruction algorithm for GelSight and a modified second-order update for the tracking algorithm. We present data from hardware experiments demonstrating that the addition of contact-based geometric information significantly improves the pose accuracy during contact, and provides robustness to occlusions of small objects by the robot’s end effector.

Researchers at the University of California, Santa Barbara have demonstrated three-dimensional imaging of unknown objects through walls, using only WiFi signals transmitted and received by two drones.

Applications could include emergency search-and-rescue, archaeological discovery, and structural monitoring, according to the researchers. Other applications could include military and law-enforcement surveillance.

Calculating 3D images from WiFi signals

In the research, two octo-copters (drones) took off and flew outside an enclosed, four-sided brick structure whose interior was unknown to the drones. One drone continuously transmitted a WiFi signal; the other drone (located on a different side of the structure) received that signal and transmitted the changes in received signal strength (“RSSI”) during the flight to a computer, which then calculated 3D high-resolution images of the objects inside (which do not need to move).

This development builds on previous 2D work by professor Yasamin Mostofi’s lab, which has pioneered sensing and imaging with everyday radio frequency signals such as WiFi. Mostofi says the success of the 3D experiments is due to the drones’ ability to approach the area from several angles, and to new methodology* developed by her lab.

The research is described in an open-access paper published April 2017 in proceedings of the Association for Computing Machinery/Institute of Electrical and Electronics Engineers International Conference on Information Processing in Sensor Networks (IPSN).

A later paper by Technical University of Munich physicists also reported a system intended for 3D imaging with WiFi, but with only simulated (and cruder) images. (An earlier 2009 paper by Mostofi et al. also reported simulated results for 3D see-through imaging of structures.)

* The researchers’ approach to enabling 3D through-wall imaging utilizes four tightly integrated key components, according to the paper.

(1) They proposed robotic paths that can capture the spatial variations in all three dimensions as much as possible, while maintaining the efficiency of the operation.

(2) They modeled the three-dimensional unknown area of interest as a Markov Random Field to capture the spatial dependencies, and utilized a graph-based belief propagation approach to update the imaging decision of each voxel (the smallest unit of a 3D image) based on the decisions of the neighboring voxels.

(3) To approximate the interaction of the transmitted wave with the area of interest, they used a linear wave model.

(4) They took advantage of the compressibility of the information content to image the area with a very small number of WiFi measurements (less than 4 percent). A simplified sketch of this sparse-recovery step appears below.
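Here is a stripped-down sketch of step (4): recover a sparse voxel image from a small number of measurements under a linear model, via L1-regularized least squares. The real geometry, the Markov-field smoothing of steps (1) through (3), and actual RSSI data are all omitted; the matrices below are synthetic:

```python
# Sparse recovery of occupied voxels from ~4% as many measurements.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
N_VOXELS, N_MEAS = 1000, 40             # 40 measurements = 4% of voxels
x_true = np.zeros(N_VOXELS)
x_true[rng.choice(N_VOXELS, 4, replace=False)] = 1.0   # occupied voxels

A = rng.normal(size=(N_MEAS, N_VOXELS)) # stand-in linear wave model
y = A @ x_true + 0.01 * rng.normal(size=N_MEAS)        # RSSI-like data

x_hat = Lasso(alpha=0.05, max_iter=10_000).fit(A, y).coef_
print("true voxels:     ", np.sort(np.flatnonzero(x_true)))
print("recovered voxels:", np.flatnonzero(x_hat > 0.1))
```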

Abstract of 3D Through-Wall Imaging with Unmanned Aerial Vehicles Using WiFi

In this paper, we are interested in the 3D through-wall imaging of a completely unknown area, using WiFi RSSI and Unmanned Aerial Vehicles (UAVs) that move outside of the area of interest to collect WiFi measurements. It is challenging to estimate a volume represented by an extremely high number of voxels with a small number of measurements. Yet many applications are time-critical and/or limited on resources, precluding extensive measurement collection. In this paper, we then propose an approach based on Markov random field modeling, loopy belief propagation, and sparse signal processing for 3D imaging based on wireless power measurements. Furthermore, we show how to design efficient aerial routes that are informative for 3D imaging. Finally, we design and implement a complete experimental testbed and show high-quality 3D robotic through-wall imaging of unknown areas with less than 4% of measurements.

Queen’s University Belfast physicists have discovered a radical new way to modify the conductivity (ease of electron flow) of electronic circuits — reducing the size of future devices.

The two latest KurzweilAI articles on graphene cited faster/lower-power performance and device-compatibility features. This new research takes another approach: Altering the properties of a crystal to eliminate the need for multiple circuits in devices.

Reconfigurable nanocircuitry

To do that, the scientists used “ferroelectric copper-chlorine boracite” crystal sheets, which are almost as thin as graphene. The researchers discovered that squeezing the crystal sheets with a sharp needle at a precise location causes a jigsaw-puzzle-like pattern of “domain walls” to develop around the contact point.

Then, using external applied electric fields, these writable, erasable domain walls can be repeatedly moved around in the crystal to create a variety of new electronic properties. They can appear, disappear, or move around within the crystal, all without permanently altering the crystal itself.

Eliminating the need for multiple circuits may reduce the size of future computers and other devices, according to the researchers.

The team’s findings have been published in an open-access paper in Nature Communications.

Abstract of Injection and controlled motion of conducting domain walls in improper ferroelectric Cu-Cl boracite

Ferroelectric domain walls constitute a completely new class of sheet-like functional material. Moreover, since domain walls are generally writable, erasable and mobile, they could be useful in functionally agile devices: for example, creating and moving conducting walls could make or break electrical connections in new forms of reconfigurable nanocircuitry. However, significant challenges exist: site-specific injection and annihilation of planar walls, which show robust conductivity, has not been easy to achieve. Here, we report the observation, mechanical writing and controlled movement of charged conducting domain walls in the improper-ferroelectric Cu3B7O13Cl. Walls are straight, tens of microns long and exist as a consequence of elastic compatibility conditions between specific domain pairs. We show that site-specific injection of conducting walls of up to hundreds of microns in length can be achieved through locally applied point-stress and, once created, that they can be moved and repositioned using applied electric fields.

Adding a molecular structure containing carbon, chromium, and oxygen atoms retains graphene’s superior conductive properties. The metal atoms (silver, in this experiment) to be bonded are then added to the oxygen atoms on top. (credit: Songwei Che et al./Nano Letters)

University of Illinois at Chicago scientists have solved a fundamental problem that has held back the use of wonder material graphene in a wide variety of electronics applications.

When graphene is bonded (attached) to metal atoms (such as molybdenum) in devices such as solar cells, graphene’s superior conduction properties degrade.

The solution: Instead of adding molecules directly to the individual carbon atoms of graphene, the new method first adds a sort of buffer (consisting of chromium, carbon, and oxygen atoms) to the graphene, and then adds the metal atoms to this buffer material instead. That enables the graphene to retain its unique properties of electrical conduction.

In an experiment, the researchers successfully added silver nanoparticles to graphene with this method. That enhanced the power-conversion efficiency of graphene-based solar cells 11-fold, said Vikas Berry, associate professor and department head of chemical engineering and senior author of a paper on the research, published in Nano Letters.

Researchers at Indian Institute of Technology and Clemson University were also involved in the study. The research was funded by the National Science Foundation.

Abstract of Retained Carrier-Mobility and Enhanced Plasmonic-Photovoltaics of Graphene via Ring-Centered η6 Functionalization and Nanointerfacing

Binding graphene with auxiliary nanoparticles for plasmonics, photovoltaics, and/or optoelectronics, while retaining the trigonal-planar bonding of sp2 hybridized carbons to maintain its carrier-mobility, has remained a challenge. The conventional nanoparticle-incorporation route for graphene is to create nucleation/attachment sites via “carbon-centered” covalent functionalization, which changes the local hybridization of carbon atoms from trigonal-planar sp2 to tetrahedral sp3. This disrupts the lattice planarity of graphene, thus dramatically deteriorating its mobility and innate superior properties. Here, we show large-area, vapor-phase, “ring-centered” hexahapto (η6) functionalization of graphene to create nucleation-sites for silver nanoparticles (AgNPs) without disrupting its sp2 character. This is achieved by the grafting of chromium tricarbonyl [Cr(CO)3] with all six carbon atoms (sigma-bonding) in the benzenoid ring on graphene to form an (η6-graphene)Cr(CO)3 complex. This nondestructive functionalization preserves the lattice continuum with a retention in charge carrier mobility (9% increase at 10 K); with AgNPs attached on graphene/n-Si solar cells, we report an ∼11-fold plasmonic-enhancement in the power conversion efficiency (1.24%).

How a graphene-based transistor would work. A graphene nanoribbon (GNR) is created by unzipping (opening up) a portion of a carbon nanotube (CNT) (the flat area, shown with pink arrows above it). The GNR switching is controlled by two surrounding parallel CNTs. The magnitudes and relative directions of the control current, ICTRL (blue arrows) in the CNTs determine the rotation direction of the magnetic fields, B (green). The magnetic fields then control the GNR magnetization (based on the recent discovery of negative magnetoresistance), which causes the GNR to switch from resistive (no current) to conductive, resulting in current flow, IGNR (pink arrows) — in other words, causing the GNR to act as a transistor gate. The magnitude of the current flow through the GNR functions as the binary gate output — with binary 1 representing the current flow of the conductive state and binary 0 representing no current (the resistive state). (credit: Joseph S. Friedman et al./Nature Communications)

A future graphene-based transistor using spintronics could lead to tinier computers that are a thousand times faster and use a hundredth of the power of silicon-based computers.

The radical transistor concept, created by a team of researchers at Northwestern University, The University of Texas at Dallas, University of Illinois at Urbana-Champaign, and University of Central Florida, is explained this month in an open-access paper in the journal Nature Communications.

Transistors act as on and off switches. A series of transistors in different arrangements act as logic gates, allowing microprocessors to solve complex arithmetic and logic problems. But the speed of computer microprocessors that rely on silicon transistors has been relatively stagnant since around 2005, with clock speeds mostly in the 3 to 4 gigahertz range.

Clock speeds approaching the terahertz range

The researchers discovered that by applying a magnetic field to a graphene ribbon (created by unzipping a carbon nanotube), they could change the resistance of current flowing through the ribbon. The magnetic field — controlled by increasing or decreasing the current through adjacent carbon nanotubes — increased or decreased the flow of current.

A cascading series of graphene transistor-based logic circuits could produce a massive jump, with clock speeds approaching the terahertz range — a thousand times faster.* They would also be smaller and substantially more efficient, allowing device-makers to shrink technology and squeeze in more functionality, according to Ryan M. Gelfand, an assistant professor in The College of Optics & Photonics at the University of Central Florida.

The researchers hope to inspire the fabrication of these cascaded logic circuits to stimulate a future transformative generation of energy-efficient computing.

* Unlike other spintronic logic proposals, these new logic gates can be cascaded directly through the carbon materials without requiring intermediate circuits and amplification between gates. That would result in compact circuits with reduced area that are far more efficient than with CMOS switching, which is limited by charge transfer and accumulation from RLC (resistance-inductance-capacitance) interconnect delays.

Abstract of Cascaded spintronic logic with low-dimensional carbon

Remarkable breakthroughs have established the functionality of graphene and carbon nanotube transistors as replacements to silicon in conventional computing structures, and numerous spintronic logic gates have been presented. However, an efficient cascaded logic structure that exploits electron spin has not yet been demonstrated. In this work, we introduce and analyse a cascaded spintronic computing system composed solely of low-dimensional carbon materials. We propose a spintronic switch based on the recent discovery of negative magnetoresistance in graphene nanoribbons, and demonstrate its feasibility through tight-binding calculations of the band structure. Covalently connected carbon nanotubes create magnetic fields through graphene nanoribbons, cascading logic gates through incoherent spintronic switching. The exceptional material properties of carbon materials permit Terahertz operation and two orders of magnitude decrease in power-delay product compared to cutting-edge microprocessors. We hope to inspire the fabrication of these cascaded logic circuits to stimulate a transformative generation of energy-efficient computing.

A team of researchers at MIT and elsewhere has developed a new approach to deep learning systems — using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep-learning computations.

Deep-learning systems are based on artificial neural networks that mimic the way the brain learns from an accumulation of examples. They can enable technologies such as face- and voice-recognition software, or scour vast amounts of medical data to find patterns that could be useful diagnostically, for example.

But the computations these systems carry out are highly complex and demanding, even for supercomputers. Traditional computer architectures are not very efficient for calculations needed for neural-network tasks that involve repeated multiplications of matrices (arrays of numbers). These can be computationally intensive for conventional CPUs or even GPUs.

Programmable nanophotonic processor

Instead, the new approach uses an optical device that the researchers call a “programmable nanophotonic processor.” Multiple light beams are directed in such a way that their waves interact with each other, producing interference patterns that “compute” the intended operation.

The optical chips using this architecture could, in principle, carry out dense matrix multiplications (the most power-hungry and time-consuming part in AI algorithms) for learning tasks much faster, compared to conventional electronic chips. The researchers expect a computational speed enhancement of at least two orders of magnitude over the state-of-the-art and three orders of magnitude in power efficiency.
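The principle can be seen in a few lines of linear algebra: any weight matrix factors, by singular-value decomposition, into two unitary matrices and a diagonal one. In the photonic architecture, the unitaries map onto meshes of Mach–Zehnder interferometers while the diagonal maps onto optical attenuation or amplification. A numerical illustration of that factorization:

```python
# SVD view of the photonic matrix-multiply unit: M = U @ diag(S) @ Vh.
import numpy as np

rng = np.random.default_rng(4)
M = rng.normal(size=(4, 4))            # a layer's weight matrix
U, S, Vh = np.linalg.svd(M)            # unitary, diagonal, unitary

x = rng.normal(size=4)                 # input signal amplitudes
y = U @ (S * (Vh @ x))                 # three optical stages in sequence
print(np.allclose(y, M @ x))           # True: identical linear transform
```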

“This chip, once you tune it, can carry out matrix multiplication with, in principle, zero energy, almost instantly,” says Marin Soljacic, one of the MIT researchers on the team.

To demonstrate the concept, the team set the programmable nanophotonic processor to implement a neural network that recognizes four basic vowel sounds. Even with the prototype system, they were able to achieve a 77 percent accuracy level, compared to about 90 percent for conventional systems. There are “no substantial obstacles” to scaling up the system for greater accuracy, according to Soljacic.

The team says it will still take a lot more time and effort to make this system useful. However, once the system is scaled up and fully functioning, the low-power system should find many uses, especially in situations where power is limited, such as self-driving cars, drones, and mobile consumer devices. Other uses include signal processing for data transmission and computer centers.

The research was published Monday (June 12, 2017) in a paper in the journal Nature Photonics (open-access version available on arXiv).

The team also included researchers at Elenion Technologies of New York and the Université de Sherbrooke in Quebec. The work was supported by the U.S. Army Research Office through the Institute for Soldier Nanotechnologies, the National Science Foundation, and the Air Force Office of Scientific Research.

Abstract of Deep learning with coherent nanophotonic circuits

Artificial neural networks are computational network models inspired by signal processing in the brain. These models have dramatically improved performance for many machine-learning tasks, including speech and image recognition. However, today’s computing hardware is inefficient at implementing neural networks, in large part because much of it was designed for von Neumann computing schemes. Significant effort has been made towards developing electronic architectures tuned to implement artificial neural networks that exhibit improved computational speed and accuracy. Here, we propose a new architecture for a fully optical neural network that, in principle, could offer an enhancement in computational speed and power efficiency over state-of-the-art electronics for conventional inference tasks. We experimentally demonstrate the essential part of the concept using a programmable nanophotonic processor featuring a cascaded array of 56 programmable Mach–Zehnder interferometers in a silicon photonic integrated circuit and show its utility for vowel recognition.

MIT researchers and associates have come up with a breakthrough method of remotely stimulating regions deep within the brain, replacing the invasive surgery now required for implanting electrodes for Parkinson’s and other brain disorders.

The new method could make deep-brain stimulation for brain disorders less expensive, more accessible to patients, and less risky (avoiding brain hemorrhage and infection).

Working with mice, the researchers applied two high-frequency electrical currents at two slightly different frequencies (E1 and E2 in the diagram below), attaching electrodes (similar to those used with an EEG machine) to the surface of the skull.

At these high frequencies (too high to recruit neural firing), the currents have no effect on brain tissue. But where the currents converge deep in the brain, they interfere with one another in such a way that they generate a low-frequency current (corresponding to the red envelope in the diagram) inside neurons, thus stimulating neural electrical activity.

The researchers named this method “temporal interference stimulation” (that is, interference between the two currents at slightly different frequencies, generating the difference frequency).* For the experimental setup shown in the diagram above, the E1 current was 1kHz (1,000 Hz), which mixed with a 1.04kHz E2 current. That generated a current with a 40Hz “delta f” difference frequency — a frequency that can stimulate neural activity in the brain. (The researchers found no harmful effects in any part of the mouse brain.)
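The arithmetic of the envelope is easy to check numerically: summing a 1 kHz and a 1.04 kHz sinusoid gives a high-frequency carrier whose amplitude beats at the 40 Hz difference frequency, per the identity sin(a) + sin(b) = 2 sin((a+b)/2) cos((a−b)/2):

```python
# Two high-frequency fields summing to a 40 Hz envelope.
import numpy as np

fs = 100_000                           # sampling rate (Hz)
t = np.arange(0, 0.1, 1 / fs)          # 100 ms window
total = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 1040 * t)

# Same signal as a 1020 Hz carrier times a 20 Hz cosine, whose absolute
# value (the envelope) oscillates at 40 Hz.
product = 2 * np.sin(2 * np.pi * 1020 * t) * np.cos(2 * np.pi * 20 * t)
print(np.allclose(total, product))     # True
```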

“Traditional deep-brain stimulation requires opening the skull and implanting an electrode, which can have complications,” explains Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT, and the senior author of the study, which appears in the June 1, 2017 issue of the journal Cell. Also, “only a small number of people can do this kind of neurosurgery.”

Custom-designed, targeted deep-brain stimulation

If this new method is perfected and clinically tested, neurologists could control the size and location of the exact tissue that receives the electrical stimulation for each patient, by selecting the frequency of the currents and the number and location of the electrodes, according to the researchers.

Neurologists could also steer the location of deep-brain stimulation in real time, without moving the electrodes, by simply altering the currents. In this way, deep targets could be stimulated for conditions such as Parkinson’s, epilepsy, depression, and obsessive-compulsive disorder — without affecting surrounding brain structures.

The researchers are also exploring the possibility of using this method to experimentally treat other brain conditions, such as autism, and for basic science investigations.

Co-author Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory, and researchers in her lab tested this technique in mice and found that they could stimulate small regions deep within the brain, including the hippocampus. But they were also able to shift the site of stimulation, allowing them to activate different parts of the motor cortex and prompt the mice to move their limbs, ears, or whiskers.

“We showed that we can very precisely target a brain region to elicit not just neuronal activation but behavioral responses,” says Tsai.

Last year, Tsai showed (open access) that using light to visually induce brain waves of a particular frequency could substantially reduce the beta amyloid plaques seen in Alzheimer’s disease, in the brains of mice. She now plans to explore whether this new type of electrical stimulation could offer a new way to generate the same type of beneficial brain waves.

This new method is also an alternative to other brain-stimulation methods.

Transcranial magnetic stimulation (TMS), which is FDA-approved for treating depression and to study the basic science of cognition, emotion, sensation, and movement, can stimulate deep brain structures but can result in surface regions being strongly stimulated, according to the researchers.

Transcranial ultrasound, as well as expression of heat-sensitive receptors combined with injection of thermomagnetic nanoparticles, have been proposed, “but the unknown mechanism of action … and the need to genetically manipulate the brain, respectively, may limit their immediate use in humans,” the researchers note in the paper.

The research was funded in part by the Wellcome Trust, a National Institutes of Health Director’s Pioneer Award, an NIH Director’s Transformative Research Award, the New York Stem Cell Foundation Robertson Investigator Award, the MIT Center for Brains, Minds, and Machines, Jeremy and Joyce Wertheimer, Google, a National Science Foundation Career Award, the MIT Synthetic Intelligence Project, and Harvard Catalyst: The Harvard Clinical and Translational Science Center.

Abstract of Noninvasive Deep Brain Stimulation via Temporally Interfering Electric Fields

We report a noninvasive strategy for electrically stimulating neurons at depth. By delivering to the brain multiple electric fields at frequencies too high to recruit neural firing, but which differ by a frequency within the dynamic range of neural firing, we can electrically stimulate neurons throughout a region where interference between the multiple fields results in a prominent electric field envelope modulated at the difference frequency. We validated this temporal interference (TI) concept via modeling and physics experiments, and verified that neurons in the living mouse brain could follow the electric field envelope. We demonstrate the utility of TI stimulation by stimulating neurons in the hippocampus of living mice without recruiting neurons of the overlying cortex. Finally, we show that by altering the currents delivered to a set of immobile electrodes, we can steerably evoke different motor patterns in living mice.

This figure shows eight different real faces that were presented to a monkey, together with reconstructions made by analyzing electrical activity from 205 neurons recorded while the monkey was viewing the faces. (credit: Doris Tsao)

In a paper published (open access) June 1 in the journal Cell, researchers report that they have cracked the code for facial identity in the primate brain.

“We’ve discovered that this code is extremely simple,” says senior author Doris Tsao, a professor of biology and biological engineering at the California Institute of Technology. “We can now reconstruct a face that a monkey is seeing by monitoring the electrical activity of only 205 neurons in the monkey’s brain. One can imagine applications in forensics where one could reconstruct the face of a criminal by analyzing a witness’s brain activity.”

The researchers previously identified the six “face patches” — general areas of the primate and human brain that are responsible for identifying faces — all located in the inferior temporal (IT) cortex. They also found that these areas are packed with specific nerve cells that fire action potentials much more strongly when seeing faces than when seeing other objects. They called these neurons “face cells.”

Previously, some experts in the field believed that each face cell (a.k.a. “grandmother cell”) in the brain represents a specific face, but this presented a paradox, says Tsao, who is also a Howard Hughes Medical Institute investigator. “You could potentially recognize 6 billion people, but you don’t have 6 billion face cells in the IT cortex. There had to be some other solution.”

Instead, they found that rather than representing a specific identity, each face cell represents a specific axis within a multidimensional space, which they call the “face space.” These axes can combine in different ways to create every possible face. In other words, there is no “Jennifer Aniston” neuron.

The clinching piece of evidence: the researchers could create a large set of faces that looked extremely different, but which all caused the cell to fire in exactly the same way. “This was completely shocking to us — we had always thought face cells were more complex. But it turns out each face cell is just measuring distance along a single axis of face space, and is blind to other features,” Tsao says.

AI applications

“The way the brain processes this kind of information doesn’t have to be a black box,” explains first author Le Chang. “Although there are many steps of computations between the image we see and the responses of face cells, the code of these face cells turned out to be quite simple once we found the proper axes. This work suggests that other objects could be encoded with similarly simple coordinate systems.”

The research also has artificial intelligence applications. “This could inspire new machine learning algorithms for recognizing faces,” Tsao adds. “In addition, our approach could be used to figure out how units in deep networks encode other things, such as objects and sentences.”

This research was supported by the National Institutes of Health, the Howard Hughes Medical Institute, the Tianqiao and Chrissy Chen Institute for Neuroscience at Caltech, and the Swartz Foundation.

* The researchers started by creating a 50-dimensional space that could represent all faces. They assigned 25 dimensions to shape (such as the distance between eyes or the width of the hairline) and 25 dimensions to nonshape-related appearance features, such as skin tone and texture.

Using macaque monkeys as a model system, the researchers inserted electrodes into the brains that could record individual signals from single face cells within the face patches. They found that each face cell fired in proportion to the projection of a face onto a single axis in the 50-dimensional face space. Knowing these axes, the researchers then developed an algorithm that could decode additional faces from neural responses.

In other words, they could now show the monkey an arbitrary new face, and recreate the face that the monkey was seeing from electrical activity of face cells in the animal’s brain. When placed side by side, the photos that the monkeys were shown and the faces that were recreated using the algorithm were nearly identical. Face cells from only two of the face patches (106 cells in one patch and 99 cells in another) were enough to reconstruct the faces. “People always say a picture is worth a thousand words,” Tsao says. “But I like to say that a picture of a face is worth about 200 neurons.”
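Because the reported code is linear, with each cell’s firing rate proportional to the projection of the face onto one axis, reconstruction reduces to solving a linear system. The toy simulation below (random axes and a made-up noise level standing in for the recorded data; not the authors’ code) encodes a 50-dimensional “face” into 205 firing rates and recovers it by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
n_dims, n_cells = 50, 205           # 50-D face space, 205 recorded cells

# Each cell's preferred axis in face space (unit-length rows).
axes = rng.normal(size=(n_cells, n_dims))
axes /= np.linalg.norm(axes, axis=1, keepdims=True)

face = rng.normal(size=n_dims)      # an arbitrary face as a point in face space

# Encoding: each rate is the projection onto that cell's axis, plus noise.
rates = axes @ face + 0.05 * rng.normal(size=n_cells)

# Decoding: least-squares inversion of the overdetermined linear code.
face_hat, *_ = np.linalg.lstsq(axes, rates, rcond=None)
print(np.corrcoef(face, face_hat)[0, 1])   # ~0.99: near-perfect recovery
```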

Caltech | Researchers decipher the enigma of how faces are encoded

Abstract of The Code for Facial Identity in the Primate Brain

Primates recognize complex objects such as faces with remarkable speed and reliability. Here, we reveal the brain’s code for facial identity. Experiments in macaques demonstrate an extraordinarily simple transformation between faces and responses of cells in face patches. By formatting faces as points in a high-dimensional linear space, we discovered that each face cell’s firing rate is proportional to the projection of an incoming face stimulus onto a single axis in this space, allowing a face cell ensemble to encode the location of any face in the space. Using this code, we could precisely decode faces from neural population responses and predict neural firing rates to faces. Furthermore, this code disavows the long-standing assumption that face cells encode specific facial identities, confirmed by engineering faces with drastically different appearance that elicited identical responses in single face cells. Our work suggests that other objects could be encoded by analogous metric coordinate systems.

A study by neuroscientists at the Toronto-based Baycrest Rotman Research Institute and Stanford University, in which subjects learned to play an unfamiliar musical instrument, suggests ways to improve brain rehabilitation methods.

In the study, published in the Journal of Neuroscience on May 24, 2017, the researchers asked young adults to listen to sounds from an unfamiliar musical instrument (a Tibetan singing bowl). Half of the subjects (the experimental group) were then asked to recreate the same sounds and rhythm by striking the bowl; the other half (the control group) were instead asked to recreate the sound by simply pressing a key on a computer keypad.

After listening to the sounds they created, subjects in the experimental group showed increased auditory-evoked P2 (P200) brain waves. This was significant because the P2 increase “occurred immediately, while in previous learning-by-listening studies, P2 increases occurred on a later day,” the researchers explained in the paper. The experimental group also had increased responsiveness of brain beta-wave oscillations and enhanced connectivity between auditory and sensorimotor cortices (areas) in the brain.

The brain changes were measured using magnetoencephalographic (MEG) recording, which is similar to EEG, but uses highly sensitive magnetic sensors.

Immediate beneficial effects on the brain

“The results … provide a neurophysiological basis for the application of music making in motor rehabilitation [increasing the ability to move arms and legs] training,” the authors state in the paper. The findings support senior author Bernhard Ross’s research in using musical training to help stroke survivors rehabilitate motor movement in their upper bodies. Baycrest scientists also have a history of breakthroughs in understanding how a person’s musical background impacts their listening abilities and cognitive function as they age.

“This study was the first time we saw direct changes in the brain after one session, demonstrating that the action of creating music leads to a strong change in brain activity,” said Ross, PhD, a senior scientist at the Rotman Research Institute.

“Music has been known to have beneficial effects on the brain, but there has been limited understanding into what about music makes a difference,” he added. “This is the first study demonstrating that learning the fine movement needed to reproduce a sound on an instrument changes the brain’s perception of sound in a way that is not seen when listening to music.”

The study’s next steps involve analyzing recovery by stroke patients with musical training compared to physiotherapy, and the impact of musical training on the brains of older adults. With additional funding, the study could explore developing musical training rehabilitation programs for other conditions that impact motor function, such as traumatic brain injury, and lead to hearing aids of the future, the researchers say.

Abstract of Sound-making actions lead to immediate plastic changes of neuromagnetic evoked responses and induced beta-band oscillations during perception

Auditory and sensorimotor brain areas interact during the action-perception cycle of sound making. Neurophysiological evidence of a feedforward model of the action and its outcome has been associated with attenuation of the N1 wave of auditory evoked responses elicited by self-generated sounds, such as vocalization or playing a musical instrument. Moreover, neural oscillations at beta-band frequencies have been related to predicting the sound outcome after action initiation. We hypothesized that a newly learned action-perception association would immediately modify interpretation of the sound during subsequent listening. Nineteen healthy young adults (seven female, twelve male) participated in three magnetoencephalography (MEG) recordings while first passively listening to recorded sounds of a bell ringing, then actively playing the bell with a mallet, and then again listening to recorded sounds. Auditory cortex activity showed characteristic P1-N1-P2 waves. The N1 was attenuated during sound making, while P2 responses were unchanged. In contrast, P2 became larger when listening after sound making compared to the initial naïve listening. The P2 increase occurred immediately, while in previous learning-by-listening studies P2 increases occurred on a later day. Also, reactivity of beta-band oscillations as well as theta coherence between auditory and sensorimotor cortices was stronger in the second listening block. These changes were significantly larger than those observed in control participants (eight female, five male), who triggered recorded sounds by a keypress. We propose that P2 characterizes familiarity with sound objects, whereas beta-band oscillation signifies involvement of the action-perception cycle, and both measures objectively indicate functional neuroplasticity in auditory perceptual learning.

Image of a group of killer T cells (green and red) surrounding a cancer cell (blue, center) (credit: NIH)

Chinese doctors have reported success with a new type of immunotherapy for multiple myeloma*, a blood cancer: 33 out of 35 patients in a clinical trial had clinical remission within two months.

The researchers used a type of T cell called “chimeric antigen receptor (CAR) T.”** In a phase I clinical trial in China, the patient’s own T cells were collected, genetically reprogrammed in a lab, and injected back into the patient. The reprogramming involved inserting an artificially designed gene into the T-cell genome, which helped the genetically reprogrammed cells find and destroy cancer cells throughout the body.

“Although recent advances in chemotherapy have prolonged life expectancy in multiple myeloma, this cancer remains incurable,” said study author Wanhong Zhao, MD, PhD, an associate director of hematology at The Second Affiliated Hospital of Xi’an Jiaotong University in Xi’an, China. “It appears that with this novel immunotherapy there may be a chance for cure in multiple myeloma, but we will need to follow patients much longer to confirm that.”***

U.S. clinical trial planned

“While it’s still early, these data are a strong sign that CAR T-cell therapy can send multiple myeloma into remission,” said ASCO expert Michael S. Sabel, MD, FACS. “It’s rare to see such high response rates, especially for a hard-to-treat cancer. This serves as proof that immunotherapy and precision medicine research pays off. We hope that future research builds on this success in multiple myeloma and other cancers.”

The researchers plan to enroll a total of 100 patients in this continuing clinical trial at four participating hospitals in China. “In early 2018 we also plan to launch a similar clinical trial in the United States. Looking ahead, we would also like to explore whether BCMA CAR T-cell therapy benefits patients who are newly diagnosed with multiple myeloma,” said Zhao.

This study was funded by Legend Biotech Co.

* Multiple myeloma is a cancer of plasma cells, which make antibodies to fight infections. Abnormal plasma cells can crowd out or suppress the growth of other cells in the bone marrow. This suppression may result in anemia, excessive bleeding, and a decreased ability to fight infection. Multiple myeloma is a relatively uncommon cancer. This year, an estimated 30,300 people [Ref. 2] in the United States will be diagnosed with multiple myeloma, and 114,250 [Ref. 3] were diagnosed with this cancer worldwide in 2012. In the United States, only about half of patients survive five years after being diagnosed with multiple myeloma. — American Society of Clinical Oncology

** Over the past few years, CAR T-cell therapy targeting a B-cell biomarker called CD19 proved very effective in initial trials for acute lymphoblastic leukemia (ALL) and some types of lymphoma, but until now, there has been little success with CAR T-cell therapies targeting other biomarkers in other types of cancer. This is one of the first clinical trials of CAR T cells targeting BCMA, which was discovered to play a role in progression of multiple myeloma in 2004. —American Society of Clinical Oncology

*** To date, 19 patients have been followed for more than four months, a pre-set time for full efficacy assessment by the International Myeloma Working Group (IMWG) consensus. Of the 19 patients, 14 have reached stringent complete response (sCR) criteria, one patient has reached partial response, and four patients have achieved very good partial remission (VgPR) criteria in efficacy. There has been only a single case of disease progression from VgPR; an extramedullary lesion of the VgPR patient reappeared three months after disappearing on CT scans. There has not been a single case of relapse among patients who reached sCR criteria. The five patients who have been followed for over a year (12–14 months) all remain in sCR status and are free of minimal residual disease as well (have no detectable cancer cells in the bone marrow). Cytokine release syndrome or CRS, a common and potentially dangerous side effect of CAR T-cell therapy, occurred in 85% of patients, but it was only transient. In the majority of patients symptoms were mild and manageable. CRS is associated with symptoms such as fever, low blood pressure, difficulty breathing, and problems with multiple organs. Only two patients on this study experienced severe CRS (grade 3) but recovered upon receiving tocilizumab (Actemra, an inflammation-reducing treatment commonly used to manage CRS in clinical trials of CAR T-cell therapy). No patients experienced neurologic side effects, another common and serious complication from CAR T-cell therapy. —American Society of Clinical Oncology

Background: Chimeric antigen receptor engineered T cell (CAR-T) therapy is a novel immunotherapeutic approach for cancer treatment and has been clinically validated in the treatment of acute lymphoblastic leukemia (ALL). Here we report an encouraging breakthrough in treating multiple myeloma (MM) using a CAR-T designated LCAR-B38M CAR-T, which principally targets BCMA. Methods: A single-arm clinical trial was conducted to assess the safety and efficacy of this approach. A total of 19 patients with refractory/relapsed multiple myeloma were included in the trial. The median number of infused cells was 4.7 (0.6–7.0) × 10^6/kg. The median follow-up time was 208 (62–321) days. Results: Among the 19 patients who completed the infusion, 7 were monitored for more than 6 months. Six of the 7 achieved complete remission (CR) and minimal residual disease (MRD)-negative status. The 12 patients followed for less than 6 months met near-CR criteria (under the modified EBMT criteria), with various degrees of positive immunofixation. All showed a progressive decrease of M-protein and are thus expected to eventually meet CR criteria. At the most recent follow-up examination, all 18 surviving patients were determined to be free of myeloma-related biochemical and hematologic abnormalities. One of the most common adverse events of CAR-T therapy is acute cytokine release syndrome (CRS), which was observed in 14 (74%) of the treated patients: 9 cases of grade 1, 2 cases of grade 2, 1 case of grade 3, and 1 case of grade 4, in a patient who recovered after treatment. Conclusions: A 100% objective response rate (ORR) to LCAR-B38M CAR-T cells was observed in refractory/relapsed myeloma patients. 18 of 19 (95%) patients reached CR or near-CR status without a single event of relapse over a median follow-up of 6 months. The majority (14) of the patients experienced mild or manageable CRS, and the rest (5) were free of diagnosable CRS. Based on the encouraging safety and efficacy outcomes, we believe that LCAR-B38M CAR-T cell therapy is an innovative and highly effective treatment for multiple myeloma.

The current process of creating new robotic systems is challenging, time-consuming, and resource-intensive. So the CMU researchers have created a visual design tool with a simple drag-and-drop interface that lets you choose from a library of standard building blocks (such as actuators and mounting brackets that are either off-the-shelf/mass-produced or can be 3D-printed) that you can combine to create complex functioning robotic systems.

(a) The design interface consists of two workspaces. The left workspace allows for designing the robot. It displays a list of various modules at the top. The leftmost menu provides various functions that allow users to define preferences for the search process visualization and for physical simulation. The right workspace (showing the robot design on a plane) runs a physics simulation of the robot for testing. (b) When you select a new module from the modules list, the system automatically makes visual suggestions (shown in red) about possible connections for this module that are relevant to the current design. (credit: Carnegie Mellon University)

An iterative design process lets you experiment by changing the number and location of actuators and adjusting the physical dimensions of your robot. An auto-completion feature can automatically generate assemblies of components by searching through possible component arrangements. It even suggests components that are compatible with each other, points out where actuators should go, and automatically generates 3D-printable structural components to connect those actuators.
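The article doesn’t detail the search procedure, but auto-completion over a parts library can be pictured as a search over chains of mutually compatible components. The sketch below is a guess at the flavor of that search, not CMU’s algorithm; the module names and compatibility table are invented:

```python
from collections import deque

# Invented module library: which part types can attach to which.
compatible = {
    "motor":   ["bracket"],
    "bracket": ["motor", "beam"],
    "beam":    ["bracket", "end_effector"],
}

def assemblies(start, goal, max_len=5):
    """Breadth-first search over chains of mutually attachable parts."""
    queue = deque([[start]])
    while queue:
        chain = queue.popleft()
        if len(chain) > 1 and chain[-1] == goal:
            yield chain          # a valid assembly connecting start to goal
            continue             # don't extend past the goal
        if len(chain) < max_len:
            for nxt in compatible.get(chain[-1], []):
                queue.append(chain + [nxt])

# All ways to connect one motor to another with at most five parts.
for chain in assemblies("motor", "motor"):
    print(" -> ".join(chain))
```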

Automated design process. (a) Start with a guiding mesh for the robot you want to make and select the orientations of its motors, using the drag and drop interface. (b) The system then searches for possible designs that connect a given pair of motors in user-defined locations, according to user-defined preferences. You can reject the solution and re-do the search with different preferences anytime. A proposed search solution connecting the root motor to the target motor (highlighted in dark red) is shown in light blue. Repeat this process for each pair of motors. (c) Since the legs are symmetric in this case, you would only need to use the search process for two legs. The interface lets you create the other pair of legs by simple editing operations. Finally, attach end-effectors of your choice and create a body plate to complete your awesome robot design. (d) shows the final design (with and without the guiding mesh). The dinosaur head mesh was manually added after this particular design, for aesthetic appeal. (credit: Carnegie Mellon University)

The research team, headed by Stelian Coros, CMU Robotics Institute assistant professor of robotics, designed a number of robots with the tool and verified its feasibility by fabricating two test robots (shown above) — a wheeled robot with a manipulator arm that can hold a pen for drawing, and a four-legged “puppy” robot that can walk forward or sideways. “Our work aims to make robotics more accessible to casual users,” says Coros.

Abstract of Computational Abstractions for Interactive Design of Robotic Devices

We present a computational design system that allows novices and experts alike to easily create custom robotic devices using modular electromechanical components. The core of our work consists of a design abstraction that models the way in which these components can be combined to form complex robotic systems. We use this abstraction to develop a visual design environment that enables an intuitive exploration of the space of robots that can be created using a given set of actuators, mounting brackets and 3d-printable components. Our computational system also provides support for design auto-completion operations, which further simplifies the task of creating robotic devices. Once robot designs are finished, they can be tested in physical simulations and iteratively improved until they meet the individual needs of their users. We demonstrate the versatility of our computational design system by creating an assortment of legged and wheeled robotic devices. To test the physical feasibility of our designs, we fabricate a wheeled device equipped with a 5-DOF arm and a quadrupedal robot.

Fun with food: These pasta shapes were generated by immersing a 2D flat gelatin film into water. (credit: Michael Indresano Photography)

Researchers at MIT’s Tangible Media Group are exploring ways to make your dining experience interactive and fun, with food that can transform its shape by just adding water.

Think of it as edible origami or culinary performance art — flat sheets of gelatin and starch that instantly sprout into three-dimensional structures, such as macaroni and rotini, or the shape of a flower.

But the researchers suggest it’s also a practical way to reduce food-shipping costs. Edible films could be stacked together, IKEA-style, and shipped to consumers, then morph into their final shape later when immersed in water.

“We did some simple calculations, such as for macaroni pasta, and even if you pack it perfectly, you still will end up with 67 percent of the volume as air,” says Wen Wang, a co-author on the paper and a former graduate student and research scientist in MIT’s Media Lab. “We thought maybe in the future our shape-changing food could be packed flat and save space.”

Programmable pasta, anyone?

At MIT, Wang and associates had been investigating the response of various materials to moisture. They started playing around with gelatin (as in Jell-O), a substance that naturally expands when it absorbs water. Gelatin can expand to varying degrees depending on its density — a characteristic that the team exploited in creating their shape-transforming structures.

They created a flat, two-layer film made from gelatin of two different densities. In theory, the top layer was more densely packed, so it should be able to absorb more water than the bottom layer. Sure enough, when they immersed the entire structure in water, the top layer curled over the bottom layer, forming a slowly rising arch — creative pasta.*
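The amount of curl in a bilayer with mismatched swelling can be estimated with the classic Timoshenko bimorph formula, originally derived for bimetal strips, with differential swelling strain playing the role of differential thermal expansion. The numbers below (layer thicknesses, moduli, and a 2% swelling mismatch) are assumptions for illustration, not measurements from the MIT work:

```python
def bilayer_curvature(strain_mismatch, t1, t2, e1_mod, e2_mod):
    """Timoshenko bimorph curvature (1/m) for two bonded layers.

    strain_mismatch: differential swelling strain between the layers
    t1, t2:          layer thicknesses (m)
    e1_mod, e2_mod:  layer elastic moduli (Pa)
    """
    m = t1 / t2                     # thickness ratio
    n = e1_mod / e2_mod             # modulus ratio
    h = t1 + t2                     # total thickness
    return (6 * strain_mismatch * (1 + m) ** 2 /
            (h * (3 * (1 + m) ** 2 + (1 + m * n) * (m ** 2 + 1 / (m * n)))))

# Assumed values: two 0.5 mm layers, equal moduli, 2% differential swelling.
kappa = bilayer_curvature(0.02, 0.5e-3, 0.5e-3, 1e5, 1e5)
print(f"radius of curvature ~ {1 / kappa * 1000:.1f} mm")   # ~33 mm
```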

To see how their designs might be implemented in a professional kitchen, the researchers showed their engineered edibles to Matthew Delisle, the head chef of high-end Boston restaurant L’Espalier. They jointly designed two culinary creations: transparent discs of gelatin flavored with plankton and squid ink that instantly wrap around small beads of caviar; and long fettuccini-like strips, made from two gelatins that melt at different temperatures, causing the noodles to spontaneously divide when hot broth melts away certain sections. “They had great texture and tasted pretty good,” says co-author Lining Yao.

They envision that their “online software can provide design instructions, and a startup company can ship the materials to your home,” Yao says.

This research was funded, in part, by the MIT Media Lab and Food + Future, a startup accelerator sponsored by Target Corporation, IDEO, and Intel.

* The team recorded the cellulose patterns and the dimensions of all of the structures they were able to produce, and also tested mechanical properties such as toughness, organizing all this data into a database. Co-authors Zhang and Cheng then built computational models of the material’s transformations, which they used to design an online interface for users to design their own edible, shape-transforming structures. “We did many lab tests and collected a database, within which you can pick different shapes, with fabrication instructions,” Wang says. “Reversibly, you can also select a basic pattern from the database and adjust the distribution or thickness, and can see how the final transformation will look.”

Tangible Media Group | Transformative Appetite

Abstract of Transformative Appetite: Shape-Changing Food Transforms from 2D to 3D by Water Interaction through Cooking

We developed a concept of transformative appetite, where edible 2D films made of common food materials (protein, cellulose or starch) can transform into 3D food during cooking. This transformation process is triggered by water adsorption, and it is strongly compatible with the ‘flat packaging’ concept for substantially reducing shipping costs and storage space. To develop these transformable foods, we performed material-based design, established a hybrid fabrication strategy, and conducted performance simulation. Users can customize food shape transformations through a pre-defined simulation platform, and then fabricate these designed patterns using additive manufacturing. Three application techniques are provided – 2D-to-3D folding, hydration-induced wrapping, and temperature-induced self-fragmentation, to present the shape, texture, and interaction with food materials. Based on this concept, several dishes were created in the kitchen, to demonstrate the futuristic dining experience through materials-based interaction design.

Engineers at UC San Diego have designed a light, flexible glove with soft robotic muscles that provide realistic tactile feedback for virtual reality (VR) experiences.

Currently, VR tactile-feedback user interfaces are bulky, uncomfortable to wear and clumsy, and they simply vibrate when a user touches a virtual surface or object.

“This is a first prototype, but it is surprisingly effective,” said Michael Tolley, a mechanical engineering professor at the Jacobs School of Engineering at UC San Diego and a senior author of a paper presented at the Electronic Imaging, Engineering Reality for Virtual Reality conference in Burlingame, California and published May 31, 2017 in Advanced Engineering Materials.

The key soft-robotic component of the new glove is a version of the “McKibben muscle” (a pneumatic, or air-based, actuator invented in the 1950s by the physician Joseph L. McKibben for use in prosthetic limbs), using soft latex chambers covered with braided fibers. When the user moves their fingers, the muscles respond like springs to apply tactile feedback; a custom fluidic control board controls the muscles by inflating and deflating them.*

Prototype haptic VR glove system. A computer generates an image of a virtual world (in this case, a piano keyboard with a river and trees in the background) that it sends to the VR device (such as an Oculus Rift). A Leap Motion depth-camera (on the table) detects the position and movement of the user’s hands and sends the data to the computer, which adds an image of the user’s hands superimposed over the keyboard (in the demo case) to the VR display and sends control data to a custom fluidic control board. The board then feeds back a signal to soft robotic components in the glove to individually inflate or deflate fingers, mimicking the user’s applied forces.
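Read as a control system, the caption describes a closed loop: track the hand, test fingertips for contact with the virtual keys, and drive each finger’s actuator in proportion. Here is a rough sketch of such a loop; every class name, gain, and the contact model are invented for illustration and are not taken from the UC San Diego system:

```python
import time

# Hypothetical stand-ins for the real components (Leap Motion camera,
# custom fluidic control board); names and units are invented.
class HandTracker:
    def fingertip_positions(self):
        """Five (x, y, z) fingertip positions in meters."""
        return [(0.0, -0.002 * i, 0.0) for i in range(5)]

class FluidicBoard:
    def set_pressure(self, finger, kpa):
        print(f"finger {finger}: {kpa:5.1f} kPa")

def contact_depth(pos, key_surface_y=0.0):
    """How far (m) a fingertip has pressed below the virtual key surface."""
    return max(0.0, key_surface_y - pos[1])

tracker, board = HandTracker(), FluidicBoard()
GAIN_KPA_PER_M = 2000.0  # assumed stiffness of the virtual key

for _ in range(3):  # a real system would run this loop continuously (~60 Hz)
    for i, pos in enumerate(tracker.fingertip_positions()):
        board.set_pressure(i, GAIN_KPA_PER_M * contact_depth(pos))
    time.sleep(1 / 60)
```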

The engineers conducted an informal pilot study of 15 users, including two VR interface experts. The demo allowed them to play the piano in VR. They all agreed that the gloves increased the immersive experience, which they described as “mesmerizing” and “amazing.”

The engineers say they’re working on making the glove cheaper, less bulky, and more portable. They would also like to bypass the Leap Motion device altogether to make the system more self-contained and compact. “Our final goal is to create a device that provides a richer experience in VR,” Tolley said. “But you could imagine it being used for surgery and video games, among other applications.”

* The researchers 3D-printed a mold to make the gloves’ soft exoskeleton. This will make the devices easier to manufacture and suitable for mass production, they said. Researchers used silicone rubber for the exoskeleton, with Velcro straps embedded at the joints.

The emerging field of soft robotics makes use of many classes of materials including metals, low glass transition temperature (Tg) plastics, and high Tg elastomers. Dependent on the specific design, all of these materials may result in extrinsically soft robots. Organic elastomers, however, have elastic moduli ranging from tens of megapascals down to kilopascals; robots composed of such materials are intrinsically soft − they are always compliant independent of their shape. This class of soft machines has been used to reduce control complexity and manufacturing cost of robots, while enabling sophisticated and novel functionalities often in direct contact with humans. This review focuses on a particular type of intrinsically soft, elastomeric robot − those powered via fluidic pressurization.

These cross-section images show three-dimensional human skin models made of living skin cells. Untreated model skin (left panel) shows a thinner dermis layer (black arrow) compared with model skin treated with the antioxidant methylene blue (right panel). A new study suggests that methylene blue could slow or reverse dermal thinning (a sign of aging) and a number of other symptoms of aging in human skin. (credit: Zheng-Mei Xiong/University of Maryland)

University of Maryland (UMD) researchers have found evidence that a common, inexpensive, and safe antioxidant chemical called methylene blue could slow the aging of human skin, based on tests in cultured human skin cells and simulated skin tissue.

“The effects we are seeing are not temporary. Methylene blue appears to make fundamental, long-term changes to skin cells,” said Kan Cao, senior author on the study and an associate professor of cell biology and molecular genetics at UMD.

The researchers tested methylene blue for four weeks in skin cells from healthy middle-aged donors, as well as those diagnosed with progeria — a rare genetic disease that mimics the normal aging process at an accelerated rate. The researchers also tested three other known antioxidants: N-Acetyl-L-Cysteine (NAC), MitoQ and MitoTEMPO (mTEM).

In these experiments, methylene blue outperformed the other three antioxidants, improving several age-related symptoms in cells from both healthy donors and progeria patients. The skin cells (fibroblasts, the cells that produce the structural protein collagen) experienced a decrease in damaging molecules known as reactive oxygen species (ROS), a reduced rate of cell death, and an increase in the rate of cell division throughout the four-week treatment.

Improvements in skin cells from older donors (>80 years old)

Next, Cao and her colleagues tested methylene blue in fibroblasts from older donors (>80 years old), again for a period of four weeks. At the end of the treatment, the cells from older donors had experienced a range of improvements, including decreased expression of two genes commonly used as indicators of cellular aging: senescence-associated beta-galactosidase and p16.

Schematic illustrations of top (left panel) and side (right panel) views of the engineered 3D skin tissue cultured on a microporous membrane insert, used for experiments and skin-irritation tests (credit: Zheng-Mei Xiong et al./Scientific Reports)

The researchers then used simulated human skin to perform several more experiments. This simulated skin — a three-dimensional model made of living skin cells — includes all the major layers and structures of skin tissue, with the exception of hair follicles and sweat glands. The model skin could also be used in skin irritation tests required by the Food and Drug Administration for the approval of new cosmetic products, Cao said.

“This system allowed us to test a range of aging symptoms that we can’t replicate in cultured cells alone,” Cao said. “Most surprisingly, we saw that model skin treated with methylene blue retained more water and increased in thickness—both of which are features typical of younger skin.”

Formulating cosmetics

The researchers also used the model skin to test the safety of cosmetic creams with methylene blue added. The results suggest that methylene blue causes little to no irritation, even at high concentrations. Encouraged by these results, Cao and colleagues hope to develop safe and effective ways for consumers to benefit from the properties of methylene blue.

“We have already begun formulating cosmetics that contain methylene blue. Now we are looking to translate this into marketable products,” Cao said. “Perhaps down the road we can customize the system with bioprinting, such that we might be able to use a patient’s own cells to provide a tailor-made testing platform specific to their needs.”

Oxidative stress is the major cause of skin aging that includes wrinkles, pigmentation, and weakened wound healing ability. Application of antioxidants in skin care is well accepted as an effective approach to delay the skin aging process. Methylene blue (MB), a traditional mitochondrial-targeting antioxidant, showed a potent ROS scavenging efficacy in cultured human skin fibroblasts derived from healthy donors and from patients with progeria, a genetic premature aging disease. In comparison with other widely used general and mitochondrial-targeting antioxidants, we found that MB was more effective in stimulating skin fibroblast proliferation and delaying cellular senescence. The skin irritation test, performed on an in vitro reconstructed 3D human skin model, indicated that MB was safe for long-term use, and did not cause irritation even at high concentrations. Application of MB to this 3D skin model further demonstrated that MB improved skin viability, promoted wound healing and increased skin hydration and dermis thickness. Gene expression analysis showed that MB treatment altered the expression of a subset of extracellular matrix proteins in the skin, including upregulation of elastin and collagen 2A1, two essential components for healthy skin. Altogether, our study suggests that MB has a great potential for skin care.

Scientists at The Scripps Research Institute (TSRI) have discovered a way to structurally modify the antibiotic called vancomycin to make an already-powerful version of the antibiotic even more potent — an advance that could eliminate the threat of antibiotic-resistant infections for years to come.

“Doctors could use this modified form of vancomycin without fear of resistance emerging,” said Dale Boger, co-chair of TSRI’s Department of Chemistry, whose team announced the finding Monday (May 29, 2017) in the journal Proceedings of the National Academy of Sciences.

“The death of a hospitalized patient in Reno, Nevada, for whom no available antibiotics worked highlights what the World Health Organization and other public-health experts have been warning: antibiotic resistance is a serious threat and has gone global,” KurzweilAI reported in January 2017. The new finding promises to lead to a solution.

First antibiotic to have three independent mechanisms of action

Vancomycin has been prescribed by doctors for 60 years, and bacteria are only now developing resistance to it, according to Boger, who called vancomycin “magical” for its proven strength against infections. Previous studies by Boger and his colleagues at TSRI had shown that it is possible to add two modifications to vancomycin to make it even more potent. “With these modifications, you need less of the drug to have the same effect,” Boger said.

The new study shows that scientists can now make a third modification that interferes with a bacterium’s cell wall in a new way, with promising results. Combined with the previous modifications, this alteration gives vancomycin a 1,000-fold increase in activity, meaning doctors would need to use less of the antibiotic to fight infection.

The discovery makes this version of vancomycin the first antibiotic to have three independent mechanisms of action. “This increases the durability of this antibiotic,” said Boger. “Organisms just can’t simultaneously work to find a way around three independent mechanisms of action. Even if they found a solution to one of those, the organisms would still be killed by the other two.”

Tested against Enterococci bacteria, the new version of vancomycin killed both vancomycin-resistant Enterococci and the original forms of Enterococci. The next step in this research is to design a way to synthesize the modified vancomycin using fewer steps in the lab; the current method takes 30 steps.

What does the research team behind AlphaGo do next after winning the three-game match Saturday (May 27) against Ke Jie — the world’s top Go player — at the Future of Go Summit in Wuzhen, China?

“Throw their energy into the next set of grand challenges, developing advanced general algorithms that could one day help scientists as they tackle some of our most complex problems, such as finding new cures for diseases, dramatically reducing energy consumption, or inventing revolutionary new materials,” says DeepMind Technologies CEO Demis Hassabis.

Academic paper, Go teaching tool

But it’s “not the end of our work with the Go community,” he adds. “We plan to publish one final academic paper later this year that will detail the extensive set of improvements we made to the algorithms’ efficiency and potential to be generalised across a broader set of problems.”

Already in the works (with Jie’s collaboration): a teaching tool that “will show AlphaGo’s analysis of Go positions, providing an insight into how the program thinks, and hopefully giving all players and fans the opportunity to see the game through the lens of AlphaGo.”

Ke Jie plays the final match (credit: DeepMind)

DeepMind is also “publishing a special set of 50 AlphaGo vs AlphaGo games, played at full-length time controls, which we believe contain many new and interesting ideas and strategies.”

DeepMind | The Future of Go Summit, Match Three: Ke Jie & AlphaGo

DeepMind | Exploring the mysteries of Go with AlphaGo and China’s top players

DeepMind | Demis Hassabis on AlphaGo: its legacy and the ‘Future of Go Summit’ in Wuzhen, China

Schematic of a new kind of 3D printer that can print touch sensors directly on a model hand. (credit: Shuang-Zhuang Guo and Michael McAlpine/Advanced Materials)

Engineering researchers at the University of Minnesota have developed a process for 3D-printing stretchable, flexible, and sensitive electronic sensory devices that could give robots or prosthetic hands — or even real skin — the ability to mechanically sense their environment.

One major use would be to give surgeons the ability to feel during minimally invasive surgeries instead of using cameras, or to increase the sensitivity of surgical robots. The process could also make it easier for robots to walk and interact with their environment.

Printing electronics directly on human skin could be used for pulse monitoring, energy harvesting (of movements), detection of finger motions (on a keyboard or other devices), or chemical sensing (for example, by soldiers in the field to detect dangerous chemicals or explosives). Or imagine a future computer mouse built into your fingertip, with haptic touch on any surface.

“While we haven’t printed on human skin yet, we were able to print on the curved surface of a model hand using our technique,” said Michael McAlpine, a University of Minnesota mechanical engineering associate professor and lead researcher on the study.* “We also interfaced a printed device with the skin and were surprised that the device was so sensitive that it could detect your pulse in real time.”

McAlpine and his team made the sensing fabric with a one-of-a-kind 3D printer they built in the lab. The multifunctional printer has four nozzles to print the various specialized “inks” that make up the layers of the device — a base layer of silicone**, top and bottom electrodes made of a silver-based piezoresistive conducting ink, a coil-shaped pressure sensor, and a supporting layer that holds the top layer in place while it sets (later washed away in the final manufacturing process).

Surprisingly, all of the layers of “inks” used in the flexible sensors can set at room temperature. Conventional 3D printing, by contrast, uses hot liquid plastic, which is both too hot and, once set, too rigid to use on the skin. The sensors can stretch up to three times their original size.

The researchers say the next step is to move toward semiconductor inks and printing on a real surface. “The manufacturing is built right into the process, so it is ready to go now,” McAlpine said.

The research was published online in the journal Advanced Materials. It was funded by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health.

Abstract of 3D Printed Stretchable Tactile Sensors

The development of methods for the 3D printing of multifunctional devices could impact areas ranging from wearable electronics and energy harvesting devices to smart prosthetics and human–machine interfaces. Recently, the development of stretchable electronic devices has accelerated, concomitant with advances in functional materials and fabrication processes. In particular, novel strategies have been developed to enable the intimate biointegration of wearable electronic devices with human skin in ways that bypass the mechanical and thermal restrictions of traditional microfabrication technologies. Here, a multimaterial, multiscale, and multifunctional 3D printing approach is employed to fabricate 3D tactile sensors under ambient conditions conformally onto freeform surfaces. The customized sensor is demonstrated with the capabilities of detecting and differentiating human movements, including pulse monitoring and finger motions. The custom 3D printing of functional materials and devices opens new routes for the biointegration of various sensors in wearable electronics systems, and toward advanced bionic skin applications.

Which of these presentation methods makes the robot look most real: live, VR, 3D TV, or 2D TV? (credit: Constanze Schreiner/University of Koblenz-Landau, Martina Mara/Ars Electronica Futurelab, and Markus Appel/University of Würzburg)

How do you make humanoid robots look least creepy? With the increasing use of industrial (and soon, service) robots, it’s a good question.

Researchers at the University of Koblenz-Landau, the University of Würzburg, and Ars Electronica Futurelab decided to find out with an experiment. They created a skit with a human actor and the Roboy robot, and presented scripted human-robot interactions (HRIs) using four types of presentations: live, virtual reality (VR), 3D TV, and 2D TV. Participants saw Roboy assisting the human in organizing appointments, conducting web searches, and finding a birthday present for the human’s mother.

People who watched live interactions with the robot were most likely to consider the robot as real, followed by viewing the same interaction via VR. Robots presented in VR also scored high in human likeness, but lower than in the live presentation.

Last week, KurzweilAI reported that Google is rolling out an enhanced version of its “smart reply” machine-learning email software to “over 1 billion Android and iOS users of Gmail” — quoting Google CEO Sundar Pichai.

We noted that the new smart-reply version is now able to handle challenging sentences like “That interesting person at the cafe we like gave me a glance,” as Google research scientist Brian Strope and engineering director Ray Kurzweil noted in a Google Research blog post.

But “given enough examples of language, a machine learning approach can discover many of these subtle distinctions,” they wrote.

How does it work? “The content of language is deeply hierarchical, reflected in the structure of language itself, going from letters to words to phrases to sentences to paragraphs to sections to chapters to books to authors to libraries, etc.,” they explained.

So a hierarchical approach to learning “is well suited to the hierarchical nature of language. We have found that this approach works well for suggesting possible responses to emails. We use a hierarchy of modules, each of which considers features that correspond to sequences at different temporal scales, similar to how we understand speech and language.”*

Simplifying communication

“With Smart Reply, Google is assuming users want to offload the burdensome task of communicating with one another to our more efficient counterparts,” says Wired writer Liz Stinson.

“It’s not wrong. The company says the machine-generated replies already account for 12 percent of emails sent; expect that number to boom once everyone with the Gmail app can send one-tap responses.

“In the short term, that might mean more stilted conversations in your inbox. In the long term, the growing number of people who use these canned responses is only going to benefit Google, whose AI grows smarter with every email sent.”

Another challenge is that our emails, particularly from mobile devices, “tend to be riddled with idioms [such as urban lingo] that make no actual sense,” suggests Washington Post writer Hayley Tsukayama. “Things change depending on context: Something ‘wicked’ could be good or very bad, for example. Not to mention, sarcasm is a thing.

“Which is all to warn you that you may still get a wildly random and even potentially inappropriate suggestion — I once got an ‘Oh no!’ suggestion to a friend’s self-deprecating pregnancy announcement, for example. If the email only calls for a one- or two-sentence response, you’ll probably find Smart Reply useful. If it requires any nuance, though, it’s still best to use your own human judgment.”

* The initial release of Smart Reply encoded input emails word-by-word with a long short-term memory (LSTM) recurrent neural network, and then decoded potential replies with yet another word-level LSTM. While this type of modeling is very effective in many contexts, even with Google infrastructure, it’s an approach that requires substantial computation resources. Instead of working word-by-word, we found an effective and highly efficient path by processing the problem more all-at-once, by comparing a simple hierarchy of vector representations of multiple features corresponding to longer time spans. — Brian Strope and Ray Kurzweil, Google Research Blog.
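As a toy version of the footnote’s idea (fixed-length vector representations built from smaller-scale features and compared all at once, rather than decoded word by word), one can rank canned replies by a dot product between sentence-level vectors. Everything below is invented for illustration; a hashing trick stands in for learned word embeddings:

```python
import zlib
import numpy as np

DIM = 64

def word_vec(word):
    """Deterministic pseudo-embedding per word (a stand-in for learned vectors)."""
    seed = zlib.crc32(word.lower().encode())
    return np.random.default_rng(seed).normal(size=DIM)

def sentence_vec(text):
    """Fold word-scale features into one sentence-scale vector (two levels of
    the hierarchy), rather than decoding a reply word by word."""
    v = np.mean([word_vec(w) for w in text.split()], axis=0)
    return v / np.linalg.norm(v)

def rank_replies(email, candidates):
    """Score each candidate reply by dot product with the email's vector."""
    e = sentence_vec(email)
    return sorted(candidates, key=lambda c: -(sentence_vec(c) @ e))

# With learned embeddings the top-ranked reply would be semantically apt;
# with this random hashing stand-in the ranking is arbitrary.
print(rank_replies("Want to grab lunch tomorrow?",
                   ["Sure, sounds good!", "Congrats!", "Oh no!"]))
```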

Julie Brefczynski-Lewis, a neuroscientist at West Virginia University, places a helmet-like PET scanner on a research subject. The mobile scanner enables studies of human interaction, movement disorders, and more. (credit: West Virginia University)

The new Ambulatory Microdose Positron Emission Tomography (AMPET) scanner allows research subjects to stand and move around as the device scans, instead of having to lie completely still and, in some cases, be administered anesthesia, requirements that made it impossible to find associations between movement and brain activity.

The AMPET scanner was developed by Julie Brefczynski-Lewis, a neuroscientist at West Virginia University (WVU), and Stan Majewski, a physicist at WVU and now at the University of Virginia. It could make possible new psychological and clinical studies on how the brain functions when affected by diseases from epilepsy to addiction, and during ordinary and dysfunctional social interactions.

Helmet support prototype with weighted helmet, allowing for freedom of movement. The counterbalance currently supports up to 10 kg but can be upgraded. Digitizing electronics will be mounted to the support above the patient. (credit: Samantha Melroy et al./Sensors)

Because AMPET sits so close to the brain, it can also “catch” more of the photons stemming from the radiotracers used in PET than larger scanners can. That means researchers can administer a lower dose of radioactive material and still get a good biological snapshot. Catching more signals also allows AMPET to create higher resolution images than regular PET.

The AMPET idea was sparked by the Rat Conscious Animal PET (RatCAP) scanner for studying rats at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory.** The scanner is a 250-gram ring that fits around the head of a rat, suspended by springs to support its weight and let the rat scurry about as the device scans. (credit: Brookhaven Lab)

The researchers plan to build a laboratory-ready version next.

Seeing more deeply into the brain

A patient or animal about to undergo a PET scan is injected with a low dose of a radiotracer — a radioactive form of a molecule that is regularly used in the body. These molecules emit antimatter particles called positrons, which travel only a tiny distance through the body. As soon as one of these positrons meets an electron in biological tissue, the pair annihilates, converting its mass to energy. This energy takes the form of two high-energy light rays, called gamma photons, that shoot off in opposite directions. PET machines detect these photons and track their paths backward to their point of origin — the tracer molecule. By measuring levels of the tracer, for instance, doctors can map areas of high metabolic activity. Mapping of different tracers provides insight into different aspects of a patient’s health. (credit: Brookhaven Lab)
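A short simulation makes the geometry concrete: each detected photon pair defines a line of response through the annihilation point, and backprojecting many such lines onto a pixel grid produces a hot spot at the tracer’s location. This is a deliberately crude 2-D sketch with idealized detectors (no scatter, attenuation, or timing), not a real reconstruction algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
source = np.array([0.3, -0.2])      # true tracer location (arbitrary units)

# Each event: an annihilation near the source emits two back-to-back gammas,
# defining a "line of response" (LOR) at a random angle.
n_events = 2000
origins = source + 0.05 * rng.normal(size=(n_events, 2))   # ~positron range
angles = rng.uniform(0, np.pi, n_events)
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)

# Backproject every LOR into a pixel grid; the hottest pixel is the estimate.
xs = np.linspace(-1, 1, 101)
grid = np.zeros((101, 101))
for o, d in zip(origins, dirs):
    for t in np.linspace(-1.5, 1.5, 300):    # march along the line
        x, y = o + t * d
        j, i = np.searchsorted(xs, x), np.searchsorted(xs, y)
        if 0 <= i < 101 and 0 <= j < 101:
            grid[i, j] += 1

i, j = np.unravel_index(grid.argmax(), grid.shape)
print(f"estimated source: ({xs[j]:.2f}, {xs[i]:.2f})")     # near (0.30, -0.20)
```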

PET scans allow researchers to see farther into the body than other imaging tools. This lets AMPET reach deep neural structures while the research subjects are upright and moving. “A lot of the important things that are going on with emotion, memory, and behavior are way deep in the center of the brain: the basal ganglia, hippocampus, amygdala,” Brefczynski-Lewis notes.

“Currently we are doing tests to validate the use of virtual reality environments in future experiments,” she said. In this virtual reality, volunteers would read from a script designed to make the subject angry, for example, as his or her brain is scanned.

In the medical sphere, the scanning helmet could help explain what happens during drug treatments. Or it could shed light on movement disorders such as epilepsy, and watch what happens in the brain during a seizure; or study the sub-population of Parkinson’s patients who have great difficulty walking, but can ride a bicycle.

The RatCAP project at Brookhaven was funded by the DOE Office of Science. Brookhaven Lab physicists use technology similar to PET scanners at the Relativistic Heavy Ion Collider (RHIC), a DOE Office of Science User Facility for nuclear physics research, where they must track the particles that fly out of near-light-speed collisions of charged nuclei. PET research at the Lab dates back to the early 1960s and includes the creation of the first single-plane scanner as well as various tracer molecules.

Abstract of Development and Design of Next-Generation Head-Mounted Ambulatory Microdose Positron-Emission Tomography (AM-PET) System

Several applications exist for a whole brain positron-emission tomography (PET) brain imager designed as a portable unit that can be worn on a patient’s head. Enabled by improvements in detector technology, a lightweight, high performance device would allow PET brain imaging in different environments and during behavioral tasks. Such a wearable system that allows the subjects to move their heads and walk—the Ambulatory Microdose PET (AM-PET)—is currently under development. This imager will be helpful for testing subjects performing selected activities such as gestures, virtual reality activities and walking. The need for this type of lightweight mobile device has led to the construction of a proof of concept portable head-worn unit that uses twelve silicon photomultiplier (SiPM) PET module sensors built into a small ring which fits around the head. This paper is focused on the engineering design of mechanical support aspects of the AM-PET project, both of the current device as well as of the coming next-generation devices. The goal of this work is to optimize design of the scanner and its mechanics to improve comfort for the subject by reducing the effect of weight, and to enable diversification of its applications amongst different research activities.

The game results show that placing slightly “noisy” bots in a central location (high-degree nodes) improves human coordination by reducing same-color neighbor nodes (the goal of the game). Square nodes show the bots and round nodes show human players; thick red lines show color conflicts, which are reduced with bot participation (right). (credit: Hirokazu Shirado and Nicholas A. Christakis/Nature)

It’s not about artificial intelligence (AI) taking over — it’s about AI improving human performance, a new study by Yale University researchers has shown.

“Much of the current conversation about artificial intelligence has to do with whether AI is a substitute for human beings. We believe the conversation should be about AI as a complement to human beings,” said Nicholas Christakis, co-director of the Yale Institute for Network Science (YINS) and senior author of the study.*

AI doesn’t even have to be super-sophisticated to make a difference in people’s lives: the study, which appears in the May 18, 2017 edition of the journal Nature, found that even “dumb AI” can help human groups.

How bots can boost human performance

In a series of experiments using teams of human players and autonomous software agents (“bots”), the bots boosted the performance of human groups and the individual players, the researchers found.

The experiment used an online color-coordination game that required groups of people to coordinate their actions toward a collective goal: every node having a color different from all of its neighbor nodes. The subjects were paid a US$2 show-up fee and a declining bonus of up to US$3 depending on how quickly the group reached a global solution (one in which every player had chosen a color different from those of all their connected neighbors). If a group did not reach a global solution within five minutes, the game was stopped and the subjects earned no bonus.

The human players also interacted with anonymous bots that were programmed with three levels of behavioral randomness — meaning the AI bots sometimes deliberately made mistakes (introduced “noise”). In addition, sometimes the bots were placed in different parts of the social network to try different strategies.

The result: The bots reduced the median time for groups to solve problems by 55.6%. The experiment also showed a cascade effect: People whose performance improved when working with the bots then influenced other human players to raise their game. More than 4,000 people participated in the experiment, which used Yale-developed software called breadboard.
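The flavor of the mechanism is easy to reproduce in miniature: greedy “human” players always pick the least-conflicting color, while noisy “bots” occasionally make a random move that can shake the network out of a locally stuck coloring. The simulation below uses invented parameters (graph size, noise rate, step budget) rather than the study’s design, and a fair comparison would average over many runs:

```python
import random

random.seed(3)
N_NODES, N_COLORS, NOISE = 20, 3, 0.1

# Sparse random graph (average degree ~3), kept small like the study's networks.
edges = {i: set() for i in range(N_NODES)}
while sum(len(v) for v in edges.values()) < 3 * N_NODES:
    a, b = random.sample(range(N_NODES), 2)
    edges[a].add(b)
    edges[b].add(a)

def solve(noise_nodes, max_steps=20000):
    """Random-order local updates; returns steps until conflict-free, or None."""
    colors = [random.randrange(N_COLORS) for _ in range(N_NODES)]
    for step in range(max_steps):
        if all(colors[a] != colors[b] for a in edges for b in edges[a]):
            return step
        i = random.randrange(N_NODES)
        if i in noise_nodes and random.random() < NOISE:
            colors[i] = random.randrange(N_COLORS)       # noisy "bot" move
        else:                                            # greedy "human" move
            colors[i] = min(range(N_COLORS),
                            key=lambda c: sum(colors[j] == c for j in edges[i]))
    return None

# Place the noisy bots at the highest-degree (most central) nodes.
central = sorted(edges, key=lambda n: -len(edges[n]))[:3]
print("greedy only:    ", solve(set()))
print("with noisy bots:", solve(set(central)))
```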

The findings have implications for a variety of situations in which people interact with AI technology, according to the researchers. Examples include human drivers who share roadways with autonomous cars and operations in which human soldiers work in tandem with AI.

“There are many ways in which the future is going to be like this,” Christakis said. “The bots can help humans to help themselves.”

Practical business AI tools

One example: Salesforce CEO Marc Benioff uses a bot called Einstein to help him run his company, Business Insider reported Thursday (May 18, 2017).

“Powered by advanced machine learning, deep learning, predictive analytics, natural language processing and smart data discovery, Einstein’s models will be automatically customised for every single customer,” according to the Salesforce blog. “It will learn, self-tune and get smarter with every interaction and additional piece of data. And most importantly, Einstein’s intelligence will be embedded within the context of business, automatically discovering relevant insights, predicting future behavior, proactively recommending best next actions and even automating tasks.”

Benioff says he also uses a version called Einstein Guidance for forecasting and modeling. It even helps end internal politics at executive meetings, calling out under-performing executives.

“AI is the next platform. All future apps for all companies will be built on AI,” Benioff predicts.

* Christakis is a professor of sociology, ecology & evolutionary biology, biomedical engineering, and medicine at Yale. Grants from the Robert Wood Johnson Foundation and the National Institute of Social Sciences supported the research.

Abstract of Locally noisy autonomous agents improve global human coordination in network experiments

Coordination in groups faces a sub-optimization problem and theory suggests that some randomness may help to achieve global optima. Here we performed experiments involving a networked colour coordination game in which groups of humans interacted with autonomous software agents (known as bots). Subjects (n = 4,000) were embedded in networks (n = 230) of 20 nodes, to which we sometimes added 3 bots. The bots were programmed with varying levels of behavioural randomness and different geodesic locations. We show that bots acting with small levels of random noise and placed in central locations meaningfully improve the collective performance of human groups, accelerating the median solution time by 55.6%. This is especially the case when the coordination problem is hard. Behavioural randomness worked not only by making the task of humans to whom the bots were connected easier, but also by affecting the gameplay of the humans among themselves and hence creating further cascades of benefit in global coordination in these heterogeneous systems.

Technology developed by Princeton University computer scientists may do for audio recordings of the human voice what word processing software did for the written word and Adobe Photoshop did for images.

“VoCo” software, still in the research stage, makes it easy to add or replace a word in an audio recording of a human voice by simply editing a text transcript of the recording. New words are automatically synthesized in the speaker’s voice — even if they don’t appear anywhere else in the recording.

The system uses a sophisticated algorithm to learn and recreate the sound of a particular voice. It could one day make editing podcasts and narration in videos much easier, or create personalized robotic voices that sound natural, according to co-developer Adam Finkelstein, a professor of computer science at Princeton. People who have lost their voices to injury or disease might also be able to recreate them through a synthetic system that nonetheless sounds natural.

An earlier version of VoCo was announced in November 2016. A paper describing the current VoCo development will be published in the July issue of the journal Transactions on Graphics (an open-access preprint is available).

How it works (technical description)

VoCo allows people to edit audio recordings with the ease of changing words on a computer screen. The system inserts new words in the same voice as the rest of the recording. (credit: Professor Adam Finkelstein)

VoCo’s user interface looks similar to other audio editing software such as the podcast editing program Audacity, with a waveform of the audio track and cut, copy and paste tools for editing. But VoCo also augments the waveform with a text transcript of the track and allows the user to replace or insert new words that don’t already exist in the track by simply typing in the transcript. When the user types the new word, VoCo updates the audio track, automatically synthesizing the new word by stitching together snippets of audio from elsewhere in the narration.

VoCo is based on an optimization algorithm that searches the voice recording and chooses the best possible combinations of phonemes (partial word sounds) to build new words in the speaker’s voice. To do this, it needs to find the individual phonemes, and sequences of them, that stitch together without abrupt transitions. The new word also needs to fit into the existing sentence so that it blends in seamlessly. Words are pronounced with different emphasis and intonation depending on where they fall in a sentence, so context is important.
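
The published system is considerably more sophisticated, but the core idea, choosing among candidate snippets for each phoneme so that consecutive snippets join smoothly, can be sketched as a small dynamic program. Everything in this Python sketch (names, the cost function) is hypothetical:

def best_snippet_sequence(candidates, join_cost):
    # candidates[i] holds the alternative audio snippets for the i-th phoneme.
    # join_cost(a, b) scores how badly snippet a transitions into snippet b
    # (e.g., a spectral mismatch at the boundary).
    # Viterbi-style search: track the cheapest snippet path ending at each option.
    best = {s: (0.0, [s]) for s in candidates[0]}
    for stage in candidates[1:]:
        step = {}
        for s in stage:
            cost, path = min(
                ((c + join_cost(p[-1], s), p) for c, p in best.values()),
                key=lambda t: t[0],
            )
            step[s] = (cost, path + [s])
        best = step
    return min(best.values(), key=lambda t: t[0])[1]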

Advanced VoCo editors can manually adjust pitch profile, amplitude and snippet duration. Novice users can choose from a predefined set of pitch profiles (bottom), or record their own voice as an exemplar to control pitch and timing (top). (credit: Professor Adam Finkelstein)

For clues about this context, VoCo looks to an audio track of the sentence that is automatically synthesized from the text transcript in an artificial voice — one that sounds robotic to human ears. This recording is used as a point of reference in building the new word. VoCo then selects pieces of sound from the real human voice recording to match the word in the synthesized track — a technique known as “voice conversion,” which inspired the project name, VoCo.

In case the synthesized word isn’t quite right, VoCo offers users several versions of the word to choose from. The system also provides an advanced editor to modify pitch and duration, allowing expert users to further polish the track.

To test how effective their system was at producing authentic-sounding edits, the researchers asked people to listen to a set of audio tracks, some of which had been edited with VoCo and others that were completely natural. The fully automated versions were mistaken for real recordings more than 60 percent of the time.

The Princeton researchers are currently refining the VoCo algorithm to improve the system’s ability to integrate synthesized words more smoothly into audio tracks. They are also working to expand the system’s capabilities to create longer phrases or even entire sentences synthesized from a narrator’s voice.

A key use for VoCo might be in intelligent personal assistants like Apple’s Siri, Google Assistant, Amazon’s Alexa, and Microsoft’s Cortana, or for using movie actors’ voices from old films in new ones, Finkelstein suggests.

But there are obvious concerns about fraud. It might even be possible to create a convincing fake video. Video clips with different facial expressions and lip movements (using Disney Research’s FaceDirector, for example) could be edited in and matched to associated fake words and other audio (such as background noise and talking), along with green screen to create fake backgrounds.

With billions of people now getting their news online and unfiltered, augmented-reality coming, and hacking way out of control, things may get even weirder. …

Zeyu Jin, a Princeton graduate student advised by Finkelstein, will present the work at the Association for Computing Machinery SIGGRAPH conference in July. The work at Princeton was funded by the Project X Fund, which provides seed funding to engineers for pursuing speculative projects. The Princeton researchers collaborated with scientists Gautham Mysore, Stephen DiVerdi, and Jingwan Lu at Adobe Research. Adobe has not announced availability of a commercial version of VoCo, or plans to integrate VoCo into Adobe Premiere Pro (or FaceDirector).

Abstract of VoCo: Text-based Insertion and Replacement in Audio Narration

Editing audio narration using conventional software typically involves many painstaking low-level manipulations. Some state-of-the-art systems allow the editor to work in a text transcript of the narration, and perform select, cut, copy and paste operations directly in the transcript; these operations are then automatically applied to the waveform in a straightforward manner. However, an obvious gap in the text-based interface is the ability to type new words not appearing in the transcript, for example inserting a new word for emphasis or replacing a misspoken word. While high-quality voice synthesizers exist today, the challenge is to synthesize the new word in a voice that matches the rest of the narration. This paper presents a system that can synthesize a new word or short phrase such that it blends seamlessly in the context of the existing narration. Our approach is to use a text to speech synthesizer to say the word in a generic voice, and then use voice conversion to convert it into a voice that matches the narration. Offering a range of degrees of control to the editor, our interface supports fully automatic synthesis, selection among a candidate set of alternative pronunciations, fine control over edit placements and pitch profiles, and even guidance by the editor’s own voice. The paper presents studies showing that the output of our method is preferred over baseline methods and often indistinguishable from the original voice.

Smart Reply suggests up to three replies to an email message — saving you typing time, or giving you time to think through a better reply. Smart Reply was previously only available to users of Google Inbox (an app that helps Gmail users organize their email messages and reply efficiently).

Hierarchical model

Developed by a team headed by Ray Kurzweil, a Google director of engineering, “the new version of Smart Reply increases the percentage of usable suggestions and is much more algorithmically efficient than the original system,” said Kurzweil in a Google Research blog post with research colleague Brian Strope today. “And that efficiency now makes it feasible for us to provide Smart Reply for Gmail.”

Capturing the meaning of a message well enough to suggest a reply is the hard part. For example, a sentence like “That interesting person at the cafe we like gave me a glance” is difficult to interpret. Was it a positive or negative gesture? But “given enough examples of language, a machine learning approach can discover many of these subtle distinctions,” they write.

UF Soft Matter | Silicone is 3D-printed into the micro-organogel support material. The printing nozzle follows a predefined trajectory, depositing liquid silicone in its wake. The liquid silicone is supported by the micro-organogel material during this printing process.

University of Florida (UF) researchers have developed a method for 3D printing soft-silicone medical implants that are stronger, quicker, less expensive, more flexible, and more comfortable than the implants currently available. That should be good news for the millions of people every year who need medical devices implanted.

Currently, devices such as ports for draining bodily fluids (cerebral spinal fluid in hydrocephalus, for example), implantable bands, balloons, soft catheters, slings, and meshes are mass-produced through molding processes. Creating customized parts for individual patients with molding would be very expensive and could take days or weeks for each job.

The 3D printing method cuts that time to hours, potentially saving lives.

The ability to easily replace silicone implants at low cost is especially important for children, where “implants may need to be replaced frequently as they grow up,” Thomas E. Angelini, an associate professor in the UF Department of Mechanical and Aerospace Engineering, explained to KurzweilAI. Angelini is senior author of a paper published May 10, 2017 in the open-access journal Science Advances.

The research could also pave the way for new therapeutic devices that encapsulate and control the release of drugs or small molecules for guiding tissue regeneration or assisting diseased organs, such as the pancreas or prostate, according to lead author Christopher O’Bryan, a UF mechanical and aerospace engineering doctoral student.

UF Soft Matter | Water is pumped from one reservoir to another using a 3D-printed silicone valve. The silicone valve contains two encapsulated ball valves that allow water to be pumped through the valve by squeezing the lower chamber. The silicone valve demonstrates the ability of the UF 3D-printing method to create multiple encapsulated components in a single part — something that cannot be done with a traditional 3D-printing approach.

Abstract of Self-assembled micro-organogels for 3D printing silicone structures

The widespread prevalence of commercial products made from microgels illustrates the immense practical value of harnessing the jamming transition; there are countless ways to use soft, solid materials that fluidize and become solid again with small variations in applied stress. The traditional routes of microgel synthesis produce materials that predominantly swell in aqueous solvents or, less often, in aggressive organic solvents, constraining ways that these exceptionally useful materials can be used. For example, aqueous microgels have been used as the foundation of three-dimensional (3D) bioprinting applications, yet the incompatibility of available microgels with nonpolar liquids, such as oils, limits their use in 3D printing with oil-based materials, such as silicone. We present a method to make micro-organogels swollen in mineral oil, using block copolymer self-assembly. The rheological properties of this micro-organogel material can be tuned, leveraging the jamming transition to facilitate its use in 3D printing of silicone structures. We find that the minimum printed feature size can be controlled by the yield stress of the micro-organogel medium, enabling the fabrication of numerous complex silicone structures, including branched perfusable networks and functional fluid pumps.
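
The yield-stress behavior described above is commonly captured by the Herschel-Bulkley relation, a standard rheological model for jammed soft materials (offered here for orientation, not necessarily the paper’s exact fit):

\sigma = \sigma_y + K \dot{\gamma}^{\,n} \qquad (\sigma > \sigma_y)

where \sigma is the applied shear stress, \sigma_y is the yield stress below which the medium behaves as a solid, \dot{\gamma} is the shear rate, K is the consistency index, and n is the flow index. The support medium fluidizes around the moving nozzle, where local stress exceeds \sigma_y, and re-solidifies behind it to hold the printed silicone in place.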

Virtual reality (VR) technology can be an effective part of treatment for phobias, post-traumatic stress disorder (PTSD) in combat veterans, and other mental health conditions, according to an open-access research review in the May/June issue of the Harvard Review of Psychiatry.

VR allows providers to “create computer-generated environments in a controlled setting, which can be used to create a sense of presence and immersion in the feared environment for individuals suffering from anxiety disorders,” says lead author Jessica L. Maples-Keller, PhD, of the University of Georgia.

One dramatic example is progressive exposure to frightening situations in patients with specific phobias, such as fear of flying. This typically includes eight steps, from walking through an airport terminal to flying during a thunderstorm with turbulence, with specific stimuli linked to the fear (such as the sound of the cabin door closing). The patient can virtually experience repeated takeoffs and landings without going on an actual flight.

VR can simulate exposures that would be costly or impractical to recreate in real life, such as combat conditions, and it lets providers control the “dose” and specific aspects of the exposure environment.

“A VR system will typically include a head-mounted display and a platform (for the patients) and a computer with two monitors — one for the provider’s interface in which he or she constructs the exposure in real time, and another for the provider’s view of the patient’s position in the VR environment,” the researchers note.

However, research so far on VR applications has had limitations, including small numbers of patients and lack of comparison groups; and mental health care providers will need specific training, the authors warn.

Abstract of The Use of Virtual Reality Technology in the Treatment of Anxiety and Other Psychiatric Disorders

Virtual reality (VR) allows users to experience a sense of presence in a computer-generated, three-dimensional environment. Sensory information is delivered through a head-mounted display and specialized interface devices. These devices track head movements so that the movements and images change in a natural way with head motion, allowing for a sense of immersion. VR, which allows for controlled delivery of sensory stimulation via the therapist, is a convenient and cost-effective treatment. This review focuses on the available literature regarding the effectiveness of incorporating VR within the treatment of various psychiatric disorders, with particular attention to exposure-based intervention for anxiety disorders. A systematic literature search was conducted in order to identify studies implementing VR-based treatment for anxiety or other psychiatric disorders. This article reviews the history of the development of VR-based technology and its use within psychiatric treatment, the empirical evidence for VR-based treatment, and the benefits for using VR for psychiatric research and treatment. It also presents recommendations for how to incorporate VR into psychiatric care and discusses future directions for VR-based treatment and clinical research.

The Moogfest four-day festival in Durham, North Carolina next weekend (May 18 — 21) explores the future of technology, art, and music. Here are some of the sessions that may be especially interesting to KurzweilAI readers. Full #Moogfest2017 Program Lineup.

Culture and Technology

(credit: Google)

The Magenta by Google Brain team will bring its work to life through an interactive demo plus workshops on the creation of art and music through artificial intelligence.

Magenta is a Google Brain project to ask and answer the questions: “Can we use machine learning to create compelling art and music? If so, how? If not, why not?” It is first a research project to advance the state of the art and creativity in music, video, image, and text generation; second, Magenta is building a community of artists, coders, and machine learning researchers.

The interactive demo will walk through an improvisation along with the machine learning models, much like the AI Jam Session. The workshop will cover how to use the open-source library to build and train models and interact with them via MIDI.

Miguel A. L. Nicolelis, MD, PhD will discuss state-of-the-art research on brain-machine interfaces, which make it possible for the brains of primates to interact directly and in a bi-directional way with mechanical, computational and virtual devices. He will review a series of recent experiments using real-time computational models to investigate how ensembles of neurons encode motor information. These experiments have revealed that brain-machine interfaces can be used not only to study fundamental aspects of neural ensemble physiology, but they can also serve as an experimental paradigm aimed at testing the design of novel neuroprosthetic devices.

He will also explore research that raises the hypothesis that the properties of a robot arm, or other neurally controlled tools, can be assimilated by brain representations as if they were extensions of the subject’s own body.

Theme: Transhumanism

Dervishes at Royal Opera House with Matthew Herbert (credit: ?)

Andy Cavatorta (MIT Media Lab) will present a conversation and workshop on a range of topics including the four-century history of music and performance at the forefront of technology. Known as the inventor of Bjork’s Gravity Harp, he has collaborated on numerous projects to create instruments using new technologies that coerce expressive music out of fire, glass, gravity, tiny vortices, underwater acoustics, and more. His instruments explore technologically mediated emotion and opportunities to express the previously inexpressible.

Csound is a sound and music computing system originally developed at MIT Media Lab. It can most accurately be described as a compiler: software that takes textual instructions in the form of source code and converts them into object code, a stream of numbers representing audio. Although it has a strong tradition as a tool for composing electro-acoustic pieces, it is used by composers and musicians for any kind of music that can be made with the help of a computer. It has traditionally been used in a non-interactive, score-driven context, but nowadays it is mostly used in real time.
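
In that spirit (textual instructions in, a stream of numbers out), here is a minimal Python sketch, not Csound itself, that renders a 440 Hz tone as raw audio samples:

import math

SAMPLE_RATE = 44100  # samples per second

def render_tone(freq_hz, duration_s, amplitude=0.5):
    # The "object code" of the passage above: one number per sample,
    # together representing the audio waveform.
    n = int(SAMPLE_RATE * duration_s)
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]

samples = render_tone(440.0, 0.01)  # 10 ms of an A4 sine tone
print(samples[:5])                  # the first few numbers in the stream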

Michael Bierylo serves as the Chair of the Electronic Production and Design Department, which offers students the opportunity to combine performance, composition, and orchestration with computer, synthesis, and multimedia technology in order to explore the limitless possibilities of musical expression.

Berklee College of Music | Electronic Production and Design (EPD) at Berklee College of Music

Chris Ianuzzi (credit: William Murray)

Chris Ianuzzi, a synthesist of Ciani-Musica and past collaborator with pioneers such as Vangelis and Peter Baumann, will present a daytime performance and sound-exploration workshops with the B11 brain interface and NeuroSky headset, a brainwave-sensing headset.

Theme: Hacking Systems

Argus Project (credit: Moogfest)

The Argus Project, from Gan Golan and Ron Morrison of NEW INC, is a wearable sculpture, video installation, and counter-surveillance training that directly intersects the public debate over police accountability. According to ancient Greek myth, Argus Panoptes was a giant with 100 eyes who served as an eternal watchman, both for – and against – the gods.

By embedding an array of camera “eyes” into a full body suit of tactical armor, the Argus exo-suit creates a “force field of accountability” around the bodies of those targeted. While some see filming the police as a confrontational or subversive act, it is, in fact, a deeply democratic one. The act of bearing witness to the actions of the state – and showing them to the world – strengthens our society and institutions. The Argus Project is not so much about an individual hero as about the Citizen Body as a whole. Between music acts, a presentation about the project will be part of the Protest Stage.

Argus Exo Suit Design (credit: Argus Project)

Theme: Protest

Found Sound Nation (credit: Moogfest)

Democracy’s Exquisite Corpse, from Found Sound Nation and Moogfest, is an immersive installation housed within a completely customized geodesic dome: a multi-person instrument and music-based round-table discussion. Artists, activists, innovators, festival attendees, and community members engage in a deeply interactive exploration of sound as a living ecosystem and primal form of communication.

Within the dome are 9 unique stations, each with its own distinct set of analog or digital sound-making devices. Each person’s set of devices is chained to the next person’s, so that everybody’s musical actions and choices affect their neighbors, and thus everyone else at the table. This instrument is a unique experiment in how technology and the instinctive language of sound can play a role in the shaping of a truly collective unconscious.

Theme: Protest

(credit: Land Marking)

Land Marking, from Halsey Burgund and Joe Zibkow of MIT Open Doc Lab, is a mobile-based music/activist project that augments the physical landscape of protest events with a layer of location-based audio contributed by event participants in real time. The project captures the audioscape and personal experiences of temporary, but extremely important, expressions of discontent and desire for change.

Land Marking will be teaming up with the Protest Stage to allow Moogfest attendees to contribute their thoughts on protests and tune into an evolving mix of commentary and field recordings from others throughout downtown Durham. Land Marking is available on select apps.

Theme: Protest

Taeyoon Choi (credit: Moogfest)

Taeyoon Choi, an artist and educator based in New York and Seoul, will be leading a Sign Making Workshop as one of the Future Thought leaders on the Protest Stage. His art practice involves performance, electronics, drawings, and storytelling that often leads to interventions in public spaces.

Taeyoon will also participate in the Handmade Computer workshop to build a 1 Bit Computer, which demonstrates how binary numbers and boolean logic can be configured to create more complex components. On their own, these components aren’t capable of computing anything particularly useful, but a computer is said to be Turing complete if it includes all of them, at which point it has the extraordinary ability to carry out any possible computation. He has participated in numerous workshops at festivals around the world, from Korea to Scotland, but primarily at the School for Poetic Computation (SFPC), an artist-run school co-founded by Taeyoon in NYC. Taeyoon Choi’s Handmade Computer projects.
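
To illustrate the workshop’s premise that simple boolean components compose into more complex ones, here is a hypothetical Python sketch (not Taeyoon Choi’s materials) that builds a one-bit full adder entirely from NAND, the classic universal gate:

def nand(a, b):
    # The universal gate: every other boolean function can be built from it.
    return 0 if (a and b) else 1

def xor(a, b):
    m = nand(a, b)
    return nand(nand(a, m), nand(b, m))

def full_adder(a, b, carry_in):
    # Adds three bits and returns (sum, carry_out); chained copies of this
    # component add binary numbers of any width.
    partial = xor(a, b)
    total = xor(partial, carry_in)
    carry_out = nand(nand(a, b), nand(partial, carry_in))
    return total, carry_out

print(full_adder(1, 1, 0))  # (0, 1): binary 1 + 1 = 10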

Theme: Protest

(credit: Moogfest)

irlbb, from Vivan Thi Tang, connects individuals after IRL (in real life) interactions, creating community that otherwise would have been missed. With a customized beta of the app for Moogfest 2017, irlbb presents a unique engagement opportunity.

Theme: Protest

Ryan Shaw and Michael Clamann (credit: Duke University)

Duke professors Ryan Shaw and Michael Clamann will lead a daily science pub talk series on topics that include future medicine, humans and anatomy, and quantum physics.

Ryan is a pioneer in mobile health (the collection and dissemination of information using mobile and wireless devices for healthcare), working with faculty at Duke’s Schools of Nursing, Medicine, and Engineering to integrate mobile technologies into first-generation care delivery systems. These technologies afford researchers, clinicians, and patients a rich stream of real-time information about individuals’ biophysical and behavioral health in everyday environments.

Michael Clamann is a Senior Research Scientist in the Humans and Autonomy Lab (HAL) within the Robotics Program at Duke University, an Associate Director at UNC’s Collaborative Sciences Center for Road Safety, and the Lead Editor for Robotics and Artificial Intelligence for Duke’s SciPol science policy tracking website. In his research, he works to better understand the complex interactions between robots and people and how they influence system effectiveness and safety.

Theme: Hacking Systems

Dave Smith (credit: Moogfest)

Dave Smith, the iconic instrument innovator and Grammy-winner, will lead Moogfest’s Instruments Innovators program and host a headlining conversation with a leading artist revealed in next week’s release. He will also host a masterclass.

As the original founder of Sequential Circuits in the mid-’70s, Dave designed the Prophet-5, the world’s first fully programmable polyphonic synth and the first musical instrument with an embedded microprocessor. From the late 1980s through the early 2000s, he worked to develop next-level synths with the likes of the Audio Engineering Society, Yamaha, Korg, and Seer Systems (for Intel). Realizing the limitations of software, Dave returned to hardware and started Dave Smith Instruments (DSI), which released the Evolver hybrid analog/digital synthesizer in 2002. Since then the DSI product lineup has grown to include the Prophet-6, OB-6, Pro 2, Prophet 12, and Prophet ’08 synthesizers, as well as the Tempest drum machine, co-designed with friend and fellow electronic instrument designer Roger Linn.

Driven by the creative and technological vision of electronic music pioneer Dave Rossum, Rossum Electro-Music creates uniquely powerful tools for electronic music production; the company is the culmination of Dave’s 45 years designing industry-defining instruments and transformative technologies. Starting with his co-founding of E-mu Systems, Dave provided the technological leadership that resulted in what many consider the premier professional modular synthesizer system, the E-mu Modular System, which became an instrument of choice for numerous recording studios, educational institutions, and artists as diverse as Frank Zappa, Leon Russell, and Hans Zimmer. In the following years, he worked on developing the Emulator keyboards and racks (i.e., the Emulator II), Emax samplers, the legendary SP-12 and SP-1200 sampling drum machines, the Proteus sound modules, and the Morpheus Z-Plane Synthesizer.

Gerhard Behles co-founded Ableton in 1999 with Robert Henke and Bernd Roggendorf. Prior to this he had been part of electronic music act “Monolake” alongside Robert Henke, but his interest in how technology drives the way music is made diverted his energy towards developing music software. He was fascinated by how dub pioneers such as King Tubby ‘played’ the recording studio, and began to shape this concept into a music instrument that became Ableton Live.

LZX Industries was born in 2008 out of the Synth DIY scene when Lars Larsen of Denton, Texas and Ed Leckie of Sydney, Australia began collaborating on the development of a modular video synthesizer. At that time, analog video synthesizers were inaccessible to artists outside of a handful of studios and universities. It was their continuing mission to design creative video instruments that (1) stay within the financial means of the artists who wish to use them, (2) honor and preserve the legacy of 20th century toolmakers, and (3) expand the boundaries of possibility. Since 2015, LZX Industries has focused on the research and development of new instruments, user support, and community building.

The program will include a “Virtual Visit” to the Large Hadron Collider — the world’s largest and most powerful particle accelerator — via a live video session, a ½ day workshop analyzing and understanding LHC data, and a “Science Fiction versus Science Fact” live debate.

The ATLAS experiment is designed to exploit the full discovery potential and the huge range of physics opportunities that the LHC provides. Physicists test the predictions of the Standard Model, which encapsulates our current understanding of what the building blocks of matter are and how they interact – resulting in such discoveries as the Higgs boson. By pushing the frontiers of knowledge, it seeks to answer fundamental questions such as: What are the basic building blocks of matter? What are the fundamental forces of nature? Could there be a greater underlying symmetry to our universe?

“The ATLAS Boogie” (referencing the Higgs boson):

ATLAS Experiment | The ATLAS Boogie

(credit: Kate Shaw)

Kate Shaw (ATLAS @ CERN), PhD, in her keynote, titled “Exploring the Universe and Impacting Society Worldwide with the Large Hadron Collider (LHC) at CERN,” will dive into the present-day and future impacts of the LHC on society. She will also share findings from the work she has done promoting particle physics in developing countries through her Physics without Frontiers program.

Theme: Future Thought

Arecibo (credit: Joe Davis/MIT)

In his keynote, Joe Davis (MIT) will trace the history of several projects centered on ideas about extraterrestrial communications that have given rise to new scientific techniques and inspired new forms of artistic practice. He will present his “swansong” — an interstellar message that is intended explicitly for human beings rather than for aliens.

Theme: Future Thought

Immortality bus (credit: Zoltan Istvan)

Zoltan Istvan (Immortality Bus), the former U.S. presidential candidate for the Transhumanist Party and leader of the Transhumanist movement, will explore the path to immortality through science, with the purpose of using science and technology to radically enhance the human being and the human experience. His futurist work has reached over 100 million people, some of it due to the Immortality Bus, which he recently drove across America with embedded journalists aboard. The bus is shaped like a giant coffin to raise life-extension awareness.

Marc Fleury and members of the Church of Space — Park Krausen, Ingmar Koch, and Christ of Veillon — return to Moogfest for a second year to present an expanded and varied program with daily explorations in modern physics with music and the occult, Illuminati performances, theatrical rituals to ERIS, and a Sunday Mass in their own dedicated “Church” venue.

“The human species will have to populate a new planet within 100 years if it is to survive,” famed physicist Stephen Hawking, PhD says in “Expedition New Earth”— a documentary that debuts this summer as part of the BBC’s forthcoming Tomorrow’s World TV series.

He cites “climate change, overdue asteroid strikes, epidemics and population growth” as reasons to leave.

That 100-year figure is dramatically lower than Hawking’s previous warning of 1,000 years, given in a speech November 15, 2016 at the Oxford Union, according to the London-based Express newspaper. “We must continue to go into space for the future of humanity,” he said.

“Deep Photo Style Transfer” is a cool new artificial-intelligence image-editing software tool that lets you transfer a style from another (“reference”) photo onto your own photo, as shown in the above examples.

An open-access arXiv paper by Cornell University computer scientists and Adobe collaborators explains that the tool can transpose the look of one photo (such as the time of day, weather, season, and artistic effects) onto your photo, producing a result reminiscent of a painting yet still photorealistic.

The algorithm also handles extreme mismatch of forms, such as transferring a fireball to a perfume bottle. (credit: Fujun Luan et al.)

“What motivated us is the idea that style could be imprinted on a photograph, but it is still intrinsically the same photo,” said Cornell computer science professor Kavita Bala. “This turned out to be incredibly hard. The key insight finally was about preserving boundaries and edges while still transferring the style.”

To do that, the researchers created deep-learning software that can add a neural network layer that pays close attention to edges within the image, like the border between a tree and a lake.
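
In the paper’s arXiv preprint, that edge-aware constraint appears as a photorealism regularization term built on the Matting Laplacian \mathcal{M}_I of the input image I, penalizing output transformations that are not locally affine in color space:

\mathcal{L}_m = \sum_{c=1}^{3} V_c[O]^{\top} \, \mathcal{M}_I \, V_c[O]

where V_c[O] is the vectorized channel c of the output image O. This term is added, with a weight, to the usual content and style losses of painterly neural style transfer.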

This research is supported by a Google Faculty Research Award and NSF awards.

Abstract of Deep Photo Style Transfer

This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Our approach builds upon the recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. Our contribution is to constrain the transformation from the input to the output to be locally affine in colorspace, and to express this constraint as a custom fully differentiable energy term. We show that this approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits.

Korean researchers have designed a “smart contact lens” that may one day allow patients with diabetes and glaucoma to self-monitor blood glucose levels and internal eye pressure.*

The study was conducted by researchers at Ulsan National Institute of Science and Technology (UNIST) and Kyungpook National University School of Medicine, both of South Korea.

Most previously reported contact lens sensors can only monitor a single analyte (such as glucose) at a time, and generally obstruct the field of vision of the subject.

The design is based on transparent, stretchable sensors that are deposited on commercially available soft-contact lenses.

Electrodes based on a hybrid graphene-silver nanowire material can measure glucose in tears. Internal eye pressure changes are measured by a sandwich structure whose electronic characteristics are modified by pressure.

Inductive coupling — batteries not required

Both of these readings are transmitted wirelessly using “inductive coupling” (similar to remote charging of batteries), so no connected power source, such as a battery, is required. This design also allows for 24-hour real-time monitoring by patients.

The researchers conducted in-vivo and in-vitro performance tests using a live rabbit and a bovine eyeball.

The team expects that the research could also lead to developing biosensors capable of detecting and treating various other human diseases, or used as a component in other biomedical devices.

The study results were published in the March issue of the journal Nature Communications. The study was supported by the 2017 CooperVision Science and Technology (S&T) Awards Program.

* Diabetes is the most common cause of high blood sugar levels. Intraocular pressure is the largest risk factor for glaucoma, a leading cause of human blindness.

How the smart contact lens works

Schematic of the top portion of the wearable contact-lens sensor. Left: antenna. Insert: Glucose sensor, based on a field-effect transistor (FET), which consists of a graphene channel and graphene/silver nanowire for source/drain. Not shown: chromium/gold interconnect, epoxy layer, and lens (below). (credit: UNIST)

Real-time glucose sensing with graphene/silver hybrid nanostructures. For selective and sensitive detection of glucose, glucose oxidase (GOD) catalyzes oxidation of glucose to gluconic acid and reduction of oxygen to hydrogen peroxide, which then decomposes to produce oxygen, protons, and electrons. The concentration of charge carriers in the FET channel, and thus the drain current, increases at higher concentrations of glucose. (credit: UNIST)
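
Written out, the reaction chain that caption describes is the standard (simplified) glucose-oxidase scheme:

\text{glucose} + \mathrm{O_2} \xrightarrow{\ \text{GOD}\ } \text{gluconic acid} + \mathrm{H_2O_2}

\mathrm{H_2O_2} \rightarrow \mathrm{O_2} + 2\mathrm{H^+} + 2e^-

The protons and electrons liberated by the hydrogen peroxide raise the carrier concentration in the graphene channel, which is why the drain current tracks the glucose concentration.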

The FET sensor (right) is modeled as an electrical RLC resonant circuit, comprised of the resistance (R) of the graphene channel, the inductance (L) of the antenna coil made of the graphene-AgNW hybrid, and the capacitance (C) of graphene-AgNW hybrid S/D electrodes. Wireless operation is achieved by mutually coupling the sensor antenna (center) with an external reader antenna (left) at a resonant frequency of 4.1 GHz. (credit: UNIST)
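
For reference, the resonant frequency of such an RLC circuit is set by its inductance and capacitance:

f_0 = \frac{1}{2\pi\sqrt{LC}}

so any change that increases L or C lowers the resonant frequency, which is exactly the downward shift in the reflection spectra described below for raised intraocular pressure.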

Schematic of intraocular pressure monitoring. A layer of silicone elastomer was placed between the two inductive spirals made of graphene-AgNW hybrid electrodes in a sandwich structure. The contact lens sensor responds to raised intraocular pressure (ocular hypertension), which increases the corneal radius of curvature, which in turn increases both the capacitance by thinning the dielectric and the inductance by bi-axial lateral expansion of the spiral coils. As a result, ocular hypertension shifts the reflection spectra of the spiral antenna to a lower frequency. (credit: UNIST)

Abstract of Wearable smart sensor systems integrated on soft contact lenses for wireless ocular diagnostics

Wearable contact lenses which can monitor physiological parameters have attracted substantial interests due to the capability of direct detection of biomarkers contained in body fluids. However, previously reported contact lens sensors can only monitor a single analyte at a time. Furthermore, such ocular contact lenses generally obstruct the field of vision of the subject. Here, we developed a multifunctional contact lens sensor that alleviates some of these limitations since it was developed on an actual ocular contact lens. It was also designed to monitor glucose within tears, as well as intraocular pressure using the resistance and capacitance of the electronic device. Furthermore, in-vivo and in-vitro tests using a live rabbit and bovine eyeball demonstrated its reliable operation. Our developed contact lens sensor can measure the glucose level in tear fluid and intraocular pressure simultaneously but yet independently based on different electrical responses.