The Future of AI Is Neuromorphic

As astonishing as computers are, they are far inferior to the human brain. Unlike devices powered by silicon chips, our brains are capable of learning, understanding images, and recognizing speech, all while using very little energy.

But a new approach called neuromorphic computing seeks to leverage the brain’s strengths by using an architecture in which chips act like neurons. The result will be a quantum leap in performance that will revolutionize countless applications.

Companies such as Intel, IBM, and Qualcomm are now involved in a high-stakes race to develop the first neuromorphic computer.

Thanks to Moore’s Law, formulated by Intel co-founder Gordon Moore in 1965, devices using the standard CMOS architecture have become lighter, faster, cheaper, and more powerful roughly every eighteen to twenty-four months for the past five decades.

Today, chipmakers' process nodes have shrunk to fourteen nanometers, and by the end of the year Intel is expected to release the first ten-nanometer chip. The company is already spending $7 billion to revamp one of its factories in Arizona to make seven-nanometer chips.

But there’s a limit to how small silicon chips can go. According to an article in Wired, the International Technology Roadmap for Semiconductors, which is sponsored by the chip industries in several countries, recently concluded that by 2021 “transistors could get to a point where they could shrink no further.” While it will still be technically possible to make smaller chips, they will reach “the economic minimum” at which the costs will be too high to justify.1

It’s not just that neuromorphic computing provides a way to keep devices on the same price/performance trajectory they’ve been on even after Moore’s Law expires. The new brain-inspired architecture will enable machines to do things that silicon chips can’t. Traditional chips excel at making precise calculations on any problem that can be expressed in numbers. A neuromorphic system, by contrast, can identify patterns in visual or auditory data and adjust its predictions based on what it learns.
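The neuron-like behavior this architecture mimics can be sketched with a leaky integrate-and-fire model, a simple abstraction commonly used to describe spiking neurons; the threshold and leak values below are illustrative assumptions, not figures from any particular chip:

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# All parameter values are illustrative, not drawn from real hardware.

def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Integrate a stream of input currents; emit a spike (1) when the
    membrane potential crosses the threshold, then reset to zero."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)   # neuron fires
            potential = 0.0    # reset after spiking
        else:
            spikes.append(0)
    return spikes

# Steady input accumulates until the neuron fires once, then resets:
print(simulate_lif([0.3, 0.3, 0.3, 0.3, 0.0, 0.0]))  # → [0, 0, 0, 1, 0, 0]
```

Unlike a conventional processor clocking through arithmetic, such a neuron stays quiet until its accumulated input crosses a threshold, which is one reason event-driven neuromorphic hardware can consume so little energy.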

A research paper by Intel scientist Charles Augustine predicts that neuromorphic chips will be able to handle artificial intelligence tasks such as cognitive computing, adaptive artificial intelligence, sensory data processing, and associative memory. They will also use 15 to 300 times less energy than the best CMOS chips.2

That’s significant because today’s AI services, such as Siri and Alexa, depend on cloud-based computing in order to perform such feats as responding to a spoken question or command...