Everything you need to know about neuromorphic computing

May 23, 2020

In July, a group of artificial intelligence researchers showcased a self-driving bicycle that could navigate around obstacles, follow a person, and respond to voice commands. While the self-driving bike itself was of little use, the AI technology behind it was remarkable. Powering the bicycle was a neuromorphic chip, a special kind of AI computer.

Neuromorphic computing is not new. In fact, it was first proposed in the 1980s. But recent developments in the artificial intelligence industry have renewed interest in neuromorphic computers.

The growing popularity of deep learning and neural networks has spurred a race to develop AI hardware specialized for neural network computations. Among the handful of trends that have emerged in the past few years is neuromorphic computing, which has shown promise because of its similarities to biological and artificial neural networks.

Artificial neurons aren’t of much use alone. But when you stack them up in layers, they can perform remarkable tasks, such as detecting objects in images and transforming voice audio to text. Deep neural networks can contain hundreds of millions of neurons, spread across dozens of layers.

The structure of an artificial neuron, the fundamental component of artificial neural networks (source: Wikipedia)
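The structure in the figure above can be sketched in a few lines of code. This is an illustrative toy, not any particular framework's API: each neuron computes a weighted sum of its inputs plus a bias, then applies a non-linear activation, and a layer is simply many such neurons reading the same inputs.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of the inputs plus a
    bias, passed through a non-linear activation (here, the sigmoid)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid squashes output to (0, 1)

def layer(inputs, weight_rows, biases):
    """A layer applies many neurons, each with its own weights, to the
    same inputs and collects their outputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]
```

Stacking calls to `layer` (feeding one layer's outputs into the next) yields the deep networks described above.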

When training a deep learning algorithm, developers run many examples through the neural network along with the expected result. The AI model adjusts each of the artificial neurons as it reviews more and more data. Gradually it becomes more accurate at the specific tasks it has been designed for, such as detecting cancer in slides or flagging fraudulent bank transactions.

The challenges of running neural networks on traditional hardware

Traditional computers rely on one or several central processing units (CPUs). CPUs pack a lot of power and can perform complex operations at high speed. But given the distributed nature of neural networks, running them on classic computers is cumbersome: their CPUs must emulate millions of artificial neurons through registers and memory locations, and calculate each of them in turn.

Graphics Processing Units (GPUs), the hardware used for games and 3D software, can do a lot of parallel processing and are especially good at performing matrix multiplication, the core operation of neural networks. GPU arrays have proven to be very useful in neural network operations.
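To see why matrix multiplication is the core operation, note that a layer's forward pass is exactly an input matrix multiplied by a weight matrix. The naive pure-Python version below is only for illustration; the key property is that every output cell is independent of the others, which is what lets a GPU compute them all in parallel.

```python
def matmul(A, B):
    """Naive matrix multiply. Each output cell is an independent
    dot product, so a GPU can assign every cell to its own thread --
    this is why graphics hardware accelerates neural networks."""
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

# A layer's forward pass: (batch of inputs) x (weight matrix)
inputs  = [[1.0, 2.0]]          # one sample with two features
weights = [[0.5, -1.0, 0.25],   # 2 inputs feeding 3 neurons
           [0.5,  1.0, 0.25]]
out = matmul(inputs, weights)   # -> [[1.5, 1.0, 0.75]]
```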

The rise in popularity of neural networks and deep learning has been a boon to GPU manufacturers. Graphics hardware company Nvidia has seen its stock price rise severalfold in the past few years.

However, GPUs also lack the physical structure of neural networks and must still emulate neurons in software, albeit at a breakneck speed. The dissimilarities between GPUs and neural networks cause a lot of inefficiencies, such as excessive power consumption.

Neuromorphic chips

Unlike general-purpose processors, neuromorphic chips are physically structured like artificial neural networks. Every neuromorphic chip consists of many small computing units, each corresponding to an artificial neuron. Unlike the cores of a CPU, these computing units can't perform many different operations; they have just enough power to perform the mathematical function of a single neuron.

Another essential characteristic of neuromorphic chips is the physical connections between artificial neurons. These connections make neuromorphic chips more like organic brains, which consist of biological neurons and their connections, called synapses. Creating an array of physically connected artificial neurons is what gives neuromorphic computers their real strength.
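Chips like Intel's Loihi implement these units as spiking neurons, which signal each other through discrete pulses rather than continuous values, much as biological neurons do across synapses. The leaky integrate-and-fire model below is a common textbook sketch of such a unit; the parameter values are illustrative and not taken from any particular chip.

```python
def leaky_integrate_and_fire(input_currents, threshold=1.0, leak=0.9):
    """Minimal spiking-neuron sketch: incoming current accumulates in a
    membrane potential that slowly leaks away; when the potential
    crosses a threshold, the unit emits a spike (1) and resets."""
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

# A steady drip of weak inputs makes the unit fire periodically
print(leaky_integrate_and_fire([0.4] * 10))
# [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

Because such a unit only does work when a spike arrives, a chip built from millions of them can sit mostly idle, which is one source of neuromorphic hardware's power savings.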

The structure of neuromorphic computers makes them much more efficient at training and running neural networks. They can run AI models at a faster speed than equivalent CPUs and GPUs while consuming less power. This is important since power consumption is already one of AI's essential challenges.

The smaller size and low power consumption of neuromorphic computers make them suitable for use cases that require running AI algorithms at the edge as opposed to the cloud.

Neuromorphic chips are characterized by the number of neurons they contain. The Tianjic chip, the neuromorphic chip used in the self-driving bike mentioned at the beginning of this article, contained about 40,000 artificial neurons and 10 million synapses in an area of 3.8 square millimeters. Compared to a GPU running an equal number of neurons, Tianjic performed 1.6-100x faster and consumed 12-10,000x less power.

For comparison, AlexNet, a popular image classification network used in many applications, has more than 62 million parameters. OpenAI's GPT-2 language model contains more than one billion parameters.

But the Tianjic chip was more of a proof of concept than a neuromorphic computer intended for commercial use. Other companies have already been developing neuromorphic chips ready to be used in different AI applications.

One example is Intel's Loihi chip and the Pohoiki Beach computer built from it. Each Loihi chip contains 131,000 neurons and 130 million synapses. The Pohoiki computer, introduced in July, packs 8.3 million neurons. Intel says Pohoiki delivers 1,000x better performance and is 10,000x more energy efficient than equivalent GPUs.

Neuromorphic computing and artificial general intelligence (AGI)

In a paper published in Nature, the AI researchers who created the Tianjic chip observed that their work could help bring us closer to artificial general intelligence (AGI). AGI is supposed to replicate the capabilities of the human brain. Current AI technologies are narrow: they can solve specific problems and are bad at generalizing their knowledge.

According to Tianjic designers, their AI chip was able to solve multiple problems, including object detection, speech recognition, navigation, and obstacle avoidance, all in a single device.

But while neuromorphic chips might bring us a step closer to emulating the human brain, we still have a long way to go. Artificial general intelligence requires more than bundling several narrow AI models together.

Creating more efficient ANN hardware won't solve those problems. But perhaps having AI chips that look much more like our brains will open new pathways to understanding and creating intelligence.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech and what we need to look out for. You can read the original article here.