Tag Archives: supercomputers

I have two items about brainlike computing. The first item hearkens back to memristors, a topic I have been following since 2008. (If you’re curious about the various twists and turns, just enter the term ‘memristor’ in this blog’s search engine.) The latest on memristors is from a team that includes IBM (US), École Polytechnique Fédérale de Lausanne (EPFL; Switzerland), and the New Jersey Institute of Technology (NJIT; US). The second bit comes from a Jülich Research Centre team in Germany and concerns an approach to brain-like computing that does not include memristors.

Multi-memristive synapses

In the inexorable march to make computers function more like human brains (neuromorphic engineering/computing), an international team has announced its latest results in a July 10, 2018 news item on Nanowerk,

Two New Jersey Institute of Technology (NJIT) researchers, working with collaborators from the IBM Research Zurich Laboratory and the École Polytechnique Fédérale de Lausanne, have demonstrated a novel synaptic architecture that could lead to a new class of information processing systems inspired by the brain.

The findings are an important step toward building more energy-efficient computing systems that also are capable of learning and adaptation in the real world. …

The researchers, Bipin Rajendran, an associate professor of electrical and computer engineering, and S. R. Nandakumar, a graduate student in electrical engineering, have been developing brain-inspired computing systems that could be used for a wide range of big data applications.

Over the past few years, deep learning algorithms have proven to be highly successful in solving complex cognitive tasks such as controlling self-driving cars and language understanding. At the heart of these algorithms are artificial neural networks – mathematical models of the neurons and synapses of the brain – that are fed huge amounts of data so that the synaptic strengths are autonomously adjusted to learn the intrinsic features and hidden correlations in these data streams.

However, the implementation of these brain-inspired algorithms on conventional computers is highly inefficient, consuming huge amounts of power and time. This has prompted engineers to search for new materials and devices to build special-purpose computers that can incorporate the algorithms. Nanoscale memristive devices, electrical components whose conductivity depends approximately on prior signaling activity, can be used to represent the synaptic strength between the neurons in artificial neural networks.

While memristive devices could potentially lead to faster and more power-efficient computing systems, they are also plagued by several reliability issues that are common to nanoscale devices. Their efficiency stems from their ability to be programmed in an analog manner to store multiple bits of information; however, their electrical conductivities vary in a non-deterministic and non-linear fashion.
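For the programmers in my audience, the “non-deterministic and non-linear” behaviour described above can be sketched in a few lines of Python. This is a toy model with made-up constants (the class name, step size, and noise level are my own assumptions), not IBM’s actual phase-change device physics,

```python
import random

class NoisyMemristor:
    """Toy memristive device (illustrative only; invented constants).

    Conductance rises with each programming pulse, but the step size
    shrinks as the device saturates (non-linearity) and carries random
    variation from pulse to pulse (non-determinism).
    """

    def __init__(self, g_min=0.0, g_max=1.0, step=0.1, noise=0.3):
        self.g_min, self.g_max = g_min, g_max
        self.step, self.noise = step, noise
        self.g = g_min  # start in the low-conductance state

    def set_pulse(self):
        # Step size decays as conductance approaches g_max (non-linear),
        # scaled by a random factor (non-deterministic).
        headroom = (self.g_max - self.g) / (self.g_max - self.g_min)
        dg = self.step * headroom * random.gauss(1.0, self.noise)
        self.g = min(self.g_max, max(self.g_min, self.g + dg))
        return self.g

dev = NoisyMemristor()
trace = [dev.set_pulse() for _ in range(20)]  # noisy, saturating curve
```

Running this a few times makes the reliability problem vivid: two “identical” devices given the same pulses end up at different conductances.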

In the experiment, the team showed how multiple nanoscale memristive devices exhibiting these characteristics could nonetheless be configured to efficiently implement artificial intelligence algorithms such as deep learning. Prototype chips from IBM containing more than one million nanoscale phase-change memristive devices were used to implement a neural network for the detection of hidden patterns and correlations in time-varying signals.

“In this work, we proposed and experimentally demonstrated a scheme to obtain high learning efficiencies with nanoscale memristive devices for implementing learning algorithms,” Nandakumar says. “The central idea in our demonstration was to use several memristive devices in parallel to represent the strength of a synapse of a neural network, but only choose one of them to be updated at each step based on the neuronal activity.”
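Here’s a minimal sketch of that central idea, assuming a simple round-robin counter for choosing which device to program (the team’s actual arbitration scheme and device counts may well differ),

```python
import random

class MultiMemristiveSynapse:
    """Several imperfect devices jointly represent one synaptic weight.

    Each potentiation event programs only ONE device, selected by a
    counter, so individual-device noise averages out across the group.
    Constants are invented for illustration.
    """

    def __init__(self, n_devices=4, step=0.1, noise=0.3):
        self.g = [0.0] * n_devices   # per-device conductances
        self.counter = 0             # arbitration counter
        self.step, self.noise = step, noise

    @property
    def weight(self):
        # Synaptic efficacy: the weight is read as the sum of all
        # device conductances.
        return sum(self.g)

    def potentiate(self):
        # Plasticity: update exactly one device per event,
        # chosen round-robin by the counter.
        i = self.counter % len(self.g)
        dg = self.step * max(0.0, random.gauss(1.0, self.noise))
        self.g[i] = min(1.0, self.g[i] + dg)
        self.counter += 1

syn = MultiMemristiveSynapse()
for _ in range(8):       # eight learning events: each device is
    syn.potentiate()     # programmed twice, in turn
```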

Also, they’ve got a couple of very nice introductory paragraphs, which I’m including here (from the June 28, 2018 paper in Nature Communications; Note: Links have been removed),

The human brain with less than 20 W of power consumption offers a processing capability that exceeds the petaflops mark, and thus outperforms state-of-the-art supercomputers by several orders of magnitude in terms of energy efficiency and volume. Building ultra-low-power cognitive computing systems inspired by the operating principles of the brain is a promising avenue towards achieving such efficiency. Recently, deep learning has revolutionized the field of machine learning by providing human-like performance in areas such as computer vision, speech recognition, and complex strategic games [1]. However, current hardware implementations of deep neural networks are still far from competing with biological neural systems in terms of real-time information-processing capabilities with comparable energy consumption.

One of the reasons for this inefficiency is that most neural networks are implemented on computing systems based on the conventional von Neumann architecture with separate memory and processing units. There are a few attempts to build custom neuromorphic hardware that is optimized to implement neural algorithms [2,3,4,5]. However, as these custom systems are typically based on conventional silicon complementary metal oxide semiconductor (CMOS) circuitry, the area efficiency of such hardware implementations will remain relatively low, especially if in situ learning and non-volatile synaptic behavior have to be incorporated. Recently, a new class of nanoscale devices has shown promise for realizing the synaptic dynamics in a compact and power-efficient manner. These memristive devices store information in their resistance/conductance states and exhibit conductivity modulation based on the programming history [6,7,8,9]. The central idea in building cognitive hardware based on memristive devices is to store the synaptic weights as their conductance states and to perform the associated computational tasks in place.
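For readers who like to see the arithmetic, here’s what “perform the associated computational tasks in place” means for a crossbar of memristive devices. The array sizes and conductance values below are invented for illustration; the point is that Ohm’s and Kirchhoff’s laws compute the matrix-vector product right where the weights are stored,

```python
import numpy as np

# Toy memristive crossbar: weights live in the array as conductances
# G[i][j]. Applying input voltages V to the rows yields output column
# currents I = G.T @ V -- the matrix-vector product is computed in
# place, with no separate memory-to-processor traffic.

G = np.array([[0.2, 0.5],
              [0.9, 0.1],
              [0.4, 0.7]])        # 3 inputs x 2 outputs, in siemens
V = np.array([1.0, 0.5, 0.0])     # input voltages, in volts

I = G.T @ V                       # column currents, in amperes
print(I)                          # -> [0.65 0.55]
```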

The two essential synaptic attributes that need to be emulated by memristive devices are the synaptic efficacy and plasticity. …

It gets more complicated from there.

Now onto the next bit.

SpiNNaker

At a guess, those capitalized N’s are meant to indicate ‘neural networks’. As best I can determine, SpiNNaker is not based on the memristor. Moving on, a July 11, 2018 news item on phys.org announces work from a team examining how neuromorphic hardware and neuromorphic software work together,

A computer built to mimic the brain’s neural networks produces similar results to those of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. Tested for accuracy, speed and energy efficiency, this custom-built computer, named SpiNNaker, has the potential to overcome the speed and power consumption problems of conventional supercomputers. The aim is to advance our knowledge of neural processing in the brain, to include learning and disorders such as epilepsy and Alzheimer’s disease.

“SpiNNaker can support detailed biological models of the cortex–the outer layer of the brain that receives and processes information from the senses–delivering results very similar to those from an equivalent supercomputer software simulation,” says Dr. Sacha van Albada, lead author of this study and leader of the Theoretical Neuroanatomy group at the Jülich Research Centre, Germany. “The ability to run large-scale detailed neural networks quickly and at low power consumption will advance robotics research and facilitate studies on learning and brain disorders.”

The human brain is extremely complex, comprising 100 billion interconnected brain cells. We understand how individual neurons and their components behave and communicate with each other, and, on the larger scale, which areas of the brain are used for sensory perception, action and cognition. However, we know less about the translation of neural activity into behavior, such as turning thought into muscle movement.

Supercomputer software has helped by simulating the exchange of signals between neurons, but even the best software run on the fastest supercomputers to date can only simulate 1% of the human brain.
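To give a flavour of what these simulators compute, here is a toy leaky integrate-and-fire neuron in Python. The constants are invented, and this is nowhere near the biological detail of the actual neuron models NEST and SpiNNaker run, but it shows the basic exchange of signals: inputs charge a membrane voltage, which leaks away until a threshold crossing fires a spike,

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# Toy constants; illustrative only.

def simulate_lif(spikes_in, weight=0.6, tau=10.0, v_thresh=1.0, dt=1.0):
    """Drive one LIF neuron with a list of 0/1 input spikes."""
    v, out = 0.0, []
    for s in spikes_in:
        v += dt * (-v / tau) + weight * s   # leak + synaptic input
        if v >= v_thresh:                   # threshold crossing
            out.append(1)                   # emit a spike ...
            v = 0.0                         # ... and reset
        else:
            out.append(0)
    return out

out = simulate_lif([1, 0, 1, 0, 1, 0, 1, 0])
print(out)   # -> [0, 0, 1, 0, 0, 0, 1, 0]
```

Scaling this up to tens of thousands of far more detailed neurons, billions of synapses, and sub-millisecond time steps is what makes the problem supercomputer-sized.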

“It is presently unclear which computer architecture is best suited to study whole-brain networks efficiently. The European Human Brain Project and Jülich Research Centre have performed extensive research to identify the best strategy for this highly complex problem. Today’s supercomputers require several minutes to simulate one second of real time, so studies on processes like learning, which take hours and days in real time, are currently out of reach,” explains Professor Markus Diesmann, co-author, head of the Computational and Systems Neuroscience department at the Jülich Research Centre.

He continues, “There is a huge gap between the energy consumption of the brain and today’s supercomputers. Neuromorphic (brain-inspired) computing allows us to investigate how close we can get to the energy efficiency of the brain using electronics.”

Developed over the past 15 years and based on the structure and function of the human brain, SpiNNaker — part of the Neuromorphic Computing Platform of the Human Brain Project — is a custom-built computer composed of half a million simple computing elements controlled by its own software. The researchers compared the accuracy, speed and energy efficiency of SpiNNaker with that of NEST–specialist supercomputer software currently in use for brain neuron-signaling research.

“The simulations run on NEST and SpiNNaker showed very similar results,” reports Steve Furber, co-author and Professor of Computer Engineering at the University of Manchester, UK. “This is the first time such a detailed simulation of the cortex has been run on SpiNNaker, or on any neuromorphic platform. SpiNNaker comprises 600 circuit boards incorporating over 500,000 small processors in total. The simulation described in this study used just six boards–1% of the total capability of the machine. The findings from our research will improve the software to reduce this to a single board.”

Van Albada shares her future aspirations for SpiNNaker, “We hope for increasingly large real-time simulations with these neuromorphic computing systems. In the Human Brain Project, we already work with neuroroboticists who hope to use them for robotic control.”

Neurogrid, built at Stanford University, is a board that can simulate spiking neural networks directly in hardware. SpiNNaker (Spiking Neural Network Architecture) [emphasis mine], designed at the University of Manchester, uses ARM processors as the building blocks of a massively parallel computing platform based on a six-layer thalamocortical model.[5]

As I understand it, Andre Geim, one of the two men (the other was Konstantin Novoselov) to first isolate graphene from a block of graphite by using sticky tape, is not thrilled that it’s known in some quarters as the graphene sticky tape method. Still, the technique caught the imagination, as Steve Connor’s March 18, 2013 article for the Independent made clear.

Scientists at UCL [University College London] have explained for the first time the mystery of why adhesive tape is so useful for graphene production.

The study, published in Advanced Materials (“Graphene–Graphene Interactions: Friction, Superlubricity, and Exfoliation”), used supercomputers to model the process through which graphene sheets are exfoliated from graphite, the material in pencils.

There are various methods for exfoliating graphene, including the famous adhesive tape method developed by Nobel Prize winner Andre Geim. However, little has been known until now about how the process of exfoliating graphene using sticky tape works.

Academics at UCL are now able to demonstrate how individual flakes of graphite can be exfoliated to make one atom thick layers. They also reveal that the process of peeling a layer of graphene demands 40% less energy than that of another common method called shearing. This is expected to have far reaching impacts for the commercial production of graphene.

“The sticky tape method works rather like peeling egg boxes apart with a vertical motion; it is easier than pulling one horizontally across another when they are neatly stacked,” explained Professor Peter Coveney, Director of the Centre for Computational Science (UCL Chemistry).

“If shearing, then you get held up by this egg carton configuration. But if you peel, you can get them apart much more easily. The polymethyl methacrylate adhesive on traditional sticky tape is ideal for picking up the edge of the graphene sheet so it can be lifted and peeled,” added Professor Coveney.

Graphite occurs naturally; its basic crystalline structure is stacks of flat sheets of strongly bonded carbon atoms in a honeycomb pattern. Graphite’s many layers are bound together by weak interactions and can easily slide large distances over one another with little friction due to their superlubricity.

The scientists at UCL simulated an experiment conducted in 2015 at Lawrence Berkeley Laboratory in Berkeley, California, which used a special microscope with atomic resolution to see how graphene flakes move around on a graphite surface.

The supercomputer’s results matched Berkeley’s observations showing that there is less movement when the graphene atoms neatly line up with the atoms below.

“Despite the vast amount of research carried out on graphene since its discovery, it is clear that until now our understanding of its behaviour on an atomic length scale was very poor,” explains PhD student Robert Sinclair (UCL Chemistry).

“The one reason above all others why the material is difficult to use is because it is hard to make. Even now, a dozen years after its discovery, companies have to apply sticky tape methods to pull it apart, as the Laureates did to uncover it; hardly a hi-tech and industrially simple process to implement. We’re now in a position to assist experimentalists to figure out how to prise it apart, or make it to order. That could have big cost implications for the emerging graphene industry,” said Professor Coveney.

From gene mapping to space exploration, humanity continues to generate ever-larger sets of data—far more information than people can actually process, manage, or understand.

Machine learning systems can help researchers deal with this ever-growing flood of information. Some of the most powerful of these analytical tools are based on a strange branch of geometry called topology, which deals with properties that stay the same even when something is bent and stretched every which way.

Such topological systems are especially useful for analyzing the connections in complex networks, such as the internal wiring of the brain, the U.S. power grid, or the global interconnections of the Internet. But even with the most powerful modern supercomputers, such problems remain daunting and impractical to solve. Now, a new approach that would use quantum computers to streamline these problems has been developed by researchers at MIT [Massachusetts Institute of Technology], the University of Waterloo, and the University of Southern California [USC].

… Seth Lloyd, the paper’s lead author and the Nam P. Suh Professor of Mechanical Engineering, explains that algebraic topology is key to the new method. This approach, he says, helps to reduce the impact of the inevitable distortions that arise every time someone collects data about the real world.

In a topological description, basic features of the data (How many holes does it have? How are the different parts connected?) are considered the same no matter how much they are stretched, compressed, or distorted. Lloyd explains that it is often these fundamental topological attributes “that are important in trying to reconstruct the underlying patterns in the real world that the data are supposed to represent.”

It doesn’t matter what kind of dataset is being analyzed, he says. The topological approach to looking for connections and holes “works whether it’s an actual physical hole, or the data represents a logical argument and there’s a hole in the argument. This will find both kinds of holes.”
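For the curious, here’s what “counting holes” looks like computationally on a classical computer: a toy Betti-number calculation for a four-vertex loop, using boundary-matrix ranks over GF(2). This is plain linear algebra on a hand-built example, not the quantum algorithm from the paper,

```python
# Betti numbers of a tiny simplicial complex: b0 counts connected
# components, b1 counts loops ("holes"). Illustrative sketch only.

def rank_gf2(rows, ncols):
    """Gaussian elimination over GF(2); each row is a bitmask int."""
    rank = 0
    rows = rows[:]
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows))
                      if rows[i] >> col & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] >> col & 1:
                rows[i] ^= rows[rank]   # cancel the pivot column
        rank += 1
    return rank

# A 4-cycle: one connected component enclosing one hole.
n_vertices = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

# Boundary matrix d1: each edge maps to its two endpoint vertices.
d1 = [(1 << u) | (1 << v) for u, v in edges]   # one row per edge
r1 = rank_gf2(d1, n_vertices)

b0 = n_vertices - r1     # components: V - rank(d1)
b1 = len(edges) - r1     # loops (no triangles here, so rank(d2) = 0)
print(b0, b1)            # -> 1 1
```

No matter how you stretch or bend that loop, b0 and b1 stay 1 and 1 — which is exactly the robustness to distortion Lloyd describes.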

Using conventional computers, that approach is too demanding for all but the simplest situations. Topological analysis “represents a crucial way of getting at the significant features of the data, but it’s computationally very expensive,” Lloyd says. “This is where quantum mechanics kicks in.” The new quantum-based approach, he says, could exponentially speed up such calculations.

Lloyd offers an example to illustrate that potential speedup: If you have a dataset with 300 points, a conventional approach to analyzing all the topological features in that system would require “a computer the size of the universe,” he says. That is, it would take 2³⁰⁰ (two to the 300th power) processing units — approximately the number of all the particles in the universe. In other words, the problem is simply not solvable in that way.
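The arithmetic behind that “computer the size of the universe” claim is easy to check,

```python
# Brute-force topological analysis of 300 points would enumerate on
# the order of 2^300 subsets of the data (candidate simplices).

n_points = 300
subsets = 2 ** n_points
print(f"{subsets:.3e}")   # -> 2.037e+90
```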

“That’s where our algorithm kicks in,” he says. Solving the same problem with the new system, using a quantum computer, would require just 300 quantum bits — and a device this size may be achieved in the next few years, according to Lloyd.

“Our algorithm shows that you don’t need a big quantum computer to kick some serious topological butt,” he says.

There are many important kinds of huge datasets where the quantum-topological approach could be useful, Lloyd says, for example understanding interconnections in the brain. “By applying topological analysis to datasets gleaned by electroencephalography or functional MRI, you can reveal the complex connectivity and topology of the sequences of firing neurons that underlie our thought processes,” he says.

The same approach could be used for analyzing many other kinds of information. “You could apply it to the world’s economy, or to social networks, or almost any system that involves long-range transport of goods or information,” says Lloyd, who holds a joint appointment as a professor of physics. But the limits of classical computation have prevented such approaches from being applied before.

While this work is theoretical, “experimentalists have already contacted us about trying prototypes,” he says. “You could find the topology of simple structures on a very simple quantum computer. People are trying proof-of-concept experiments.”

Ignacio Cirac, a professor at the Max Planck Institute of Quantum Optics in Munich, Germany, who was not involved in this research, calls it “a very original idea, and I think that it has a great potential.” He adds “I guess that it has to be further developed and adapted to particular problems. In any case, I think that this is top-quality research.”

Shown here are the connections between different regions of the brain in a control subject (left) and a subject under the influence of the psychedelic compound psilocybin (right). This demonstrates a dramatic increase in connectivity, which explains some of the drug’s effects (such as “hearing” colors or “seeing” smells). Such an analysis, involving billions of brain cells, would be too complex for conventional techniques, but could be handled easily by the new quantum approach, the researchers say. Courtesy of the researchers

I think there’s some machine translation at work in the Aug. 27, 2015 news item about Hector Barron Escobar on Azonano,

By using supercomputers, the team creates virtual atomic models that interact under different conditions before being taken to the real world, saving time and money.

With the goal of boosting the oil, mining and energy industries, as well as counteracting greenhouse gas emissions, the nanotechnologist Hector Barron Escobar designs more efficient and profitable nanomaterials.

The Mexican researcher, who lives in Australia, studies the physical and chemical properties of platinum and palladium, metals with excellent catalytic properties that improve processes in petrochemistry, solar cells and fuel cells. Because of their scarcity, these metals command a high and unprofitable price, hence the need to analyze their properties and make them long-lasting.

The structured materials that the nanotechnology specialist designs can be implemented in the petrochemical and automotive industries. In the first, they accelerate reactions in the production of hydrocarbons; in the second, nanomaterials placed in vehicles’ catalytic converters transform the pollutants emitted by combustion into less harmful waste.

Barron Escobar, who majored in physics at the National University of Mexico (UNAM), says these materials are designed by using supercomputers to create virtual atomic models that interact under different conditions before being taken to the real world.

Barron recounts how he came to Australia at the invitation of his doctoral advisor, Amanda Partner, with whom he analyzed the electronic properties of gold in the United States.

He explains that, using computer models in the Virtual Nanoscience Laboratory (VNLab) in Australia, he creates nanoparticles that interact under different environmental conditions such as temperature and pressure. He also analyzes their mechanical and electronic properties, which provide specific information about behavior and indicate the best working conditions. Together, these data serve to establish appropriate patterns or trends for a particular application.

The work of the research team serves as a guide for experts from the University of New South Wales in Australia, with whom they cooperate, to build nanoparticles with specific functions. “This way we perform virtual experiments, saving time and money, and offer the type of material, conditions and ideal size for a specific catalytic reaction, which by the traditional way would cost a lot of money in trying to find the right substance,” Barron Escobar comments.

Currently he designs nanomaterials for the mining company Orica, because in this industry explosives need to be controlled in order to avoid damaging the minerals or the environment.

His research also extends to the creation of fuel cells; with the catalysts Barron designs, it is possible to produce more electricity without polluting.

Additionally, they enhance the effectiveness of catalytic converters in petrochemistry, where these materials help accelerate oxidation processes of hydrogen and carbon, which are present in all chemical reactions when fuel and gasoline are created. “We can identify the ideal particles for improving this type of reactions.”

The nanotechnology specialist also seeks to analyze the catalytic properties of bimetallic materials involving titanium, ruthenium and gold, as well as how their reactions vary with size, shape and composition.

Barron Escobar chose to study nanomaterials because it is fascinating to see how matter at the nano level completely changes its properties: at large scale a material has one definite color but takes on another at the nanoscale, and many applications can be obtained from these metals.