A new prototype computer chip built from devices called “memristors” could process images and video much faster, and with far less power, than today’s most advanced chips, using a processing scheme similar to the one used by the human brain.

Faster image processing could have big implications for autonomous systems such as self-driving cars, says Wei Lu, professor of electrical engineering and computer science at the University of Michigan and lead author of a paper on the work in Nature Nanotechnology.

Lu’s next-generation computer components use pattern recognition to shortcut the energy-intensive process conventional systems use to dissect images. In this new work, he and colleagues demonstrate an algorithm that relies on a technique called “sparse coding” to coax their 32-by-32 array of memristors to efficiently analyze and recreate several photos.

Memristor chip. (Credit: U. Michigan)

Memristors are electrical resistors with memory—advanced electronic devices that regulate current based on the history of the voltages applied to them. They can store and process data simultaneously, which makes them a lot more efficient than traditional systems. In a conventional computer, logic and memory functions are located at different parts of the circuit.
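This in-place computation can be sketched numerically. In a memristor crossbar, the stored matrix lives in the device conductances, and applying input voltages to the rows yields output currents that are already the matrix-vector product. The sketch below is a software analogue of that idea, not the paper's circuit; the sizes and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Conductance of each crossbar cell (siemens): this IS the stored matrix.
G = rng.uniform(1e-6, 1e-4, size=(32, 32))
# Input voltages applied to the rows.
v = rng.uniform(0.0, 0.5, size=32)

# Kirchhoff's current law: each column current sums v_j * G_jk,
# so the array computes a matrix-vector product in one analog step.
i_out = G.T @ v

# A conventional computer would first fetch G from a separate memory;
# here the data never moves -- computation happens where it is stored.
print(i_out.shape)
```

The point of the sketch is the absence of a separate "fetch" step: reading out the currents and performing the multiplication are the same physical event.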

“The tasks we ask of today’s computers have grown in complexity,” Lu says. “In this ‘big data’ era, computers require costly, constant, and slow communications between their processor and memory to retrieve large amounts of data. This makes them large, expensive, and power-hungry.”

But like neural networks in a biological brain, networks of memristors can perform many operations at the same time, without having to move data around. As a result, they could enable new platforms that process a vast number of signals in parallel and are capable of advanced machine learning.

Memristors are good candidates for deep neural networks, a branch of machine learning that trains computers to execute processes without being explicitly programmed to do so.

“We need our next-generation electronics to be able to quickly process complex data in a dynamic environment. You can’t just write a program to do that. Sometimes you don’t even have a pre-defined task,” Lu says. “To make our systems smarter, we need to find ways for them to process a lot of data more efficiently. Our approach to accomplish that is inspired by neuroscience.”

Picture a chair in your mind

A mammal’s brain is able to generate sweeping, split-second impressions of what the eyes take in. One reason is that it can quickly recognize different arrangements of shapes. Humans do this using only a limited number of neurons that become active, Lu says. Both neuroscientists and computer scientists call the process “sparse coding.”

“When we take a look at a chair we will recognize it because its characteristics correspond to our stored mental picture of a chair,” Lu says. “Although not all chairs are the same and some may differ from a mental prototype that serves as a standard, each chair retains some of the key characteristics necessary for easy recognition. Basically, the object is correctly recognized the moment it is properly classified—when ‘stored’ in the appropriate category in our heads.”

Similarly, Lu’s electronic system is designed to detect the patterns very efficiently—and to use as few features as possible to describe the original input.

In our brains, different neurons recognize different patterns, Lu says.

“When we see an image, the neurons that recognize it will become more active,” he says. “The neurons will also compete with each other to naturally create an efficient representation. We’re implementing this approach in our electronic system.”
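The competition Lu describes has a standard mathematical formulation called the locally competitive algorithm (LCA), in which each "neuron" is a dictionary atom, similar atoms inhibit one another, and only a few survive above a firing threshold. The sketch below is a minimal software version of that dynamic; the dictionary here is random and the parameters are illustrative assumptions, not the values used on the chip.

```python
import numpy as np

def sparse_code_lca(x, D, lam=0.2, tau=10.0, steps=300):
    """Locally competitive algorithm: neurons (dictionary atoms) compete
    to represent x, and only a few stay active above threshold lam."""
    n_atoms = D.shape[1]
    u = np.zeros(n_atoms)                  # internal "membrane" potentials
    b = D.T @ x                            # drive: how well each atom matches x
    inhibit = D.T @ D - np.eye(n_atoms)    # similar atoms suppress each other
    a = np.zeros(n_atoms)
    for _ in range(steps):
        # Soft threshold: a neuron stays silent until |u| exceeds lam.
        a = np.where(np.abs(u) > lam, u - lam * np.sign(u), 0.0)
        u += (b - u - inhibit @ a) / tau   # competition dynamics
    return a

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)            # 128 unit-norm "neurons"

x = D[:, 5]                               # input matching one atom exactly
a = sparse_code_lca(x, D)                 # sparse: atom 5 wins the competition
```

Because active neurons inhibit the neurons most similar to them, the network settles on a representation that uses only a handful of active units, which is exactly the efficiency Lu is after.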

Training the network

The researchers trained their system to learn a “dictionary” of images. Trained on a set of grayscale image patterns, their memristor network was able to reconstruct famous paintings, photographs, and other test patterns.

If their system can be scaled up, they expect to be able to process and analyze video in real time in a compact system that can be directly integrated with sensors or cameras.

Other collaborators are from the University of Michigan, the Los Alamos National Lab, and Portland State University.

The work is part of an “Unconventional Processing of Signals for Intelligent Data Exploitation” project that aims to build a computer chip based on self-organizing, adaptive neural networks. The Defense Advanced Research Projects Agency funds the project.