How neuromorphic ‘brain chips’ will begin the next era in computing

This site may earn affiliate commissions from the links on this page. Terms of use.

IBM recently released new details about the efficiency of its TrueNorth processors, which sport a fundamentally novel design that cribs from the structure of the human brain. Rather than lining up billions of digital transistors in sequence, TrueNorth chips have a million computer ‘neurons’ that work in parallel across 256 million inter-neuron connections (‘synapses’). According to IBM’s reports, the approach is paying incredible dividends in performance and, more importantly, power efficiency. Make no mistake: Neuromorphic computing is going to change the world, and it’s going to do it more quickly than you might imagine.

The development of neuromorphic computers is thematically pretty similar to the development of digital computers: First figure out the utility of an operation (say, computing firing trajectories during wartime), then develop a crude way of doing it with the tools you already have available (say, rooms full of people doing manual arithmetic), then invent a machine to automate this process in a much more efficient way. Part of the reason a digital computer is more efficient than a human being is that its transistors can fire with incredible speed — but so can our neurons. The bigger issue is that a digital computer is designed from the ground up to do those sorts of mathematical operations; from a certain perspective, it’s a bit crazy we’ve ever tried to do efficient mathematical work on a computer like the human brain.

Shiny new GPUs may be fast, but they’re also incredibly inefficient when compared with coming neuromorphic competitors.

Similarly, we will eventually look back at the attempt to do learning operations with digital chips, including GPUs, as inherently unwise or even silly. The much more reasonable approach is to design a thinking machine suited to such operations from the most basic hardware level, as naturally predisposed to machine learning as a Celeron chip is to multiplication. This could not only greatly increase the speed of the processor for these tasks, but dramatically reduce the energy consumed to complete each one. That’s what IBM has in the works, and it’s much further along than many expect.

When tasked with classifying images (a well-understood machine learning task), a TrueNorth chip can churn through between 1,200 and 2,600 frames every second, and do it while using between 25 and 275 mW. This leads to an effective efficiency of more than 6,000 frames per second per Watt. There’s no listed standard frames/second/Watt figure for conventional GPUs using the same sorting algorithm and dataset. But considering modern graphics cards might draw 200 or even 250 watts all on their own, it’s hard not to imagine a host of low-power, high-performance applications.
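A quick back-of-envelope check shows what those numbers imply. The specific throughput/power pairings below are illustrative assumptions; IBM reports ranges, not matched pairs:

```python
# Back-of-envelope check on TrueNorth's reported efficiency figures.
# The specific throughput/power pairings here are illustrative assumptions;
# IBM reports ranges (1,200-2,600 fps at 25-275 mW), not matched pairs.

def fps_per_watt(frames_per_sec: float, milliwatts: float) -> float:
    """Frames classified per second, per watt of power drawn."""
    return frames_per_sec / (milliwatts / 1000.0)

# Even the pessimistic pairing (lowest throughput, highest power draw)
# lands above 4,000 frames/second/Watt.
worst_case = fps_per_watt(1200, 275)
best_case = fps_per_watt(2600, 275)

# For comparison: a GPU drawing 250 W would need to classify
# 1.5 million frames per second to match a 6,000 fps/W figure.
gpu_fps_needed = 6000 * 250

print(round(worst_case), round(best_case), gpu_fps_needed)
```

Even without an apples-to-apples GPU benchmark, the gap in the power denominator alone is what makes the low-power applications below plausible.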

Most significantly, there is the incredible expense of modern machine learning. Companies like Apple, Facebook, and Google can only deliver their advanced services by running expensive arrays of supercomputers designed to execute machine learning algorithms as efficiently as possible. That specialization comes at a crushing cost. Even leaving that aside, electricity alone becomes a major expense when you’re running that many computers at or near capacity, 24 hours a day. Just ask Bitcoin miners.

Google’s AlphaGo victory in the game of Go was only possible thanks to advanced machine learning of the sort TrueNorth does natively.

So, early, expensive neuromorphic hardware will likely be a major boon to service providers. We can only hope this will be passed along to consumers in the form of improved performance and wide-ranging savings. But the speed and efficiency offered by neuromorphic chips won’t stop there — reducing power draw by several orders of magnitude will allow such tasks to come out of the cloud entirely.

Want a Babel-fish-like wearable that auto-translates any foreign speech in your vicinity, without the necessity of an always-on internet connection? What about a fitness tracker that knows your every move without ever having to upload that information to a separate computer for analysis? A self-driving car that can go off the grid, or an interplanetary rover that can make unforeseen decisions while out of communication range and running on a tiny nuclear battery?

Neuromorphic chips are currently the most likely way of actually getting such jobs done. Right now there’s no indication conventional hardware could succeed in its place.

The NS16e.

IBM says it has a new rig called NS16e, an array of 16 TrueNorth processors totaling about four billion synaptic connections — nothing compared with a human brain, but seemingly more than enough to tackle modern machine learning problems. These 16 chips can talk to one another thanks to passive message-passing connections between them, broadly mirroring the function of the corpus callosum that connects the two hemispheres of the brain, though multidirectionally.
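To put "nothing compared with a human brain" in perspective, here's the scale gap sketched out. The brain figure is a commonly cited rough estimate, not a measured value:

```python
# Scale of the NS16e array versus the human brain, roughly.
# The ~100 trillion synapse figure for the brain is a commonly
# cited estimate and varies widely between sources.
chips = 16
synapses_per_chip = 256_000_000             # per TrueNorth chip
ns16e_synapses = chips * synapses_per_chip  # 4.096 billion

human_synapses = 100_000_000_000_000        # ~100 trillion, rough estimate
scale_gap = human_synapses / ns16e_synapses

print(f"NS16e: {ns16e_synapses:,} synapses")
print(f"Brain is roughly {scale_gap:,.0f}x larger")
```

A gap of four to five orders of magnitude, and yet four billion synapses is apparently enough for today's machine learning workloads.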

But IBM isn’t the only one lunging for this particular finish line. There are the requisite rumors of a Google research project. More notably, Qualcomm has claimed to have neuromorphic capacity in some of its upcoming Snapdragon processors, though it was always a bit unclear how that would work, and there hasn’t been much chatter on that front in recent times. Private investment in this space has been tentative, with most of the progress made at IBM coming thanks to an infusion of cash from DARPA.

Yes, DARPA. After all, soldiers are constantly tromping around areas of the world with poor data coverage and trying to communicate with people who speak highly localized languages. The traditional means of trying to tackle this problem is called Natural Language Processing (NLP), and right now soldiers in the field are doing mostly data retrieval for centralized NLP analysis. With neuromorphic computing available, their translators could begin breaking down a novel dialect right away, improving translation in real time.

Soldiers aren’t the only ones with a need for rugged portability, however. In particular, it seems the quickly oncoming wave of smart eyewear, from Google Glass 2.0 to Snapchat’s social media Spectacles, can only realize its true potential by removing distant data servers from their workflow. We might imagine a pair of glasses that layers a helpful augmented reality HUD over the world in real time, highlighting useful elements for you. That sort of functionality will be difficult to roll out for hundreds of millions of electronics consumers if it requires constant, high-throughput data streaming to some suburb of San Francisco.

Elon Musk’s global internet will be cool if it ever materializes, but even it won’t be fast enough to let everyone stream everything all the time.

The issue isn’t just that such a broad, constant data stream will kill our batteries — though it will — but that performance, cost, and in particular privacy will all be fundamentally improved by doing these complex tasks locally. All things being equal, the only real downside is to the service provider, which can use or sell the unique personal insights it can gain in managing all your personal requests.

Wearable computing, augmented reality, sensory assistance — all these emerging trends require the application of cutting-edge machine learning algorithms. Right now at IBM and elsewhere, we’re seeing the emergence of the technology most likely to let those algorithms spread fast enough and far enough to fully realize all of that potential.

Check out our ExtremeTech Explains series for more in-depth coverage of today’s hottest tech topics.

Nice stuff. I expect to see this as being very useful in self driving cars and body cameras

Cameron Martin

Aside from the embarrassing number of typographical and grammatical errors, decent topic and interesting developments on-the-horizon.

Joel Detrow

These will undoubtedly be used for AI in many areas, but as a gamer, I especially look forward to their use in video games. When the chips become available, the first good game which requires them will truly mark the beginning of a new era. Pretty much any game that uses NPCs, bots, or AI opponents will benefit from this new technology. Text-based games. Visual novels. Tabletop games. Shooters. 4X and RTS.

That’s not even getting into other uses unrelated to games. Digital assistants. Facial recognition and autofocus on cameras. Barcode scanners. Self-driving cars. Language translation, both between languages and from jargon into “plain English”. Augmented reality. General purpose machines which will take over at least 40% of the job market… wait, shit.

thx1138v2

Yeah, there’s that. And don’t forget that every technology ever developed has been weaponized. So what happens when you become dependent on a psychotic AI system or the electrical grid control processors go postal?

The unforeseen consequences could get very, very nasty.

Brandon Liles

Like I said above essentially their proposing sky net

Abdel

I stoped reading when I read “tiny nuclear battery”!

MichaelSB

Wake me up when TrueNorth achieves state of the art results on ImageNet. Until then, not impressed.

Mike

This is a very poor article, full of misinformation and errors – attributing Alpha Go to IBM !!!! And the caption to that picture??? The article must have taken the author all of 5 seconds to research and write … or maybe 3 seconds …

If I’ve understood correctly TrueNorth’s architecture is not digital, so it is .. analog, right? Or are there binaries involved in the programming? It is worlds apart from a Von Neumann architecture, because all memory is fully embedded with logic, at an L1 or even lower (register?) equivalent level, eliminating the Von Neumann bottleneck. It exchanges clock speed and transistor/neuron count for massive, staggering parallelism, much like the human brain.

Each human brain neuron is connected to ~10,000 others, for a total of ~1 quadrillion synapses out of its ~100 billion neurons. TrueNorth’s neurons are each connected to 256 others, a much lower but still impressive complexity, if you consider that an 18-core Intel Xeon’s cores are connected at the core level with just one ring bus. Perhaps the reduced complexity with the human brain can be somewhat mitigated by increased clock speed, since the brain “clocks” up to about 100 Hz.

I really wonder how on Earth do you program (or train?) such a little beastie. Oh, and Graham, you should edit the caption of the AlphaGo picture, since it was Google who beat Go, not IBM.

MichaelSB

No, it’s digital, and the memory is placed in L1 or L2 cache fashion, so you still have to do lots and lots of lookups. If you want something that is more similar to a brain, look at analog architectures, such as memristor or floating gate transistor crossbars, where the computing elements are also the memory elements.

Sweetie

“Each human brain neuron is connected to ~10,000 others, for a total of ~1 quadrillion synapses out of its ~100 billion neurons.”

And yet most people are idiots, even the “smart” ones.

BHesse

I really hope I can get my hands on one of these TrueNorth chips. I have submitted applications to IBM…twice…over the past year or two, and never received a response. Me and my Titan will have to get along without it!

We have submitted multiple applications as well, without even so much as a response. If you want to experiment with neuromorphic chips, the CM1K from General Vision is the only one currently available on the market. I’m not betting too much on the IBM technology — too many other companies are also in the race now (BrainChip, Qualcomm, General Vision) and have products on the market(not to mention they have much better customer service).

Michael

AlphaGo is the AI that won at Go (If you actually looked at the photo, you’d see that’s what the picture is about). AlphaGo has no association to IBM, as it’s a Google company. That is a blatant lie saying it belongs to IBM.

Blake Boschetti

This is a work, m8.

Brandon Liles

Essentially what their proposing is sky net

corujox

Sky Net or Cybor Brain ?

Kory with a K

Yeah, then God returns and says ”Did I put these things in you? Who authorized this crap?” … and yeah, pretty much the current political situation ensues.

Kory with a K

or at least saying ‘brain and chip’ in the same sentence to let the religous trolls go crazy with that stupid combo of literature – just go all out – skynet-capable chips – we’re already selling our souls to silicon for the next 20 years instead of silica. It’s simple, tweak the processor, use it for data mining, and use it as a cash machine – quantum intelligent holographic epistemological aspect of space or temporal brains in a hypothetical aspect – machines hate emotions… and the brain is not an ’emotion creator’ it’s a water filter.
