CPUs of the future: AMD partners with ARM, Intel designs a brain on a chip


In the past week, both AMD and Intel have given us a tantalizing peek at their next-generation neuromorphic (brain-like) computer chips. These chips, it is hoped, will provide brain-like performance (i.e. processing power and massive parallelism way beyond current CPUs) while consuming minimal amounts of power.

After announcing last year at its Fusion Developer Summit that its Heterogeneous System Architecture (HSA) would be an open, architecture-agnostic spec that could be implemented by anyone (including Intel), AMD last week announced that its future APUs will feature an ARM Cortex-A5 core to implement TrustZone, ARM Holdings’ security and DRM solution. AMD also announced that it has teamed up with ARM, Imagination Technologies, MediaTek, and Texas Instruments to form the HSA Foundation. The idea is that this non-profit consortium will try to coalesce around a single HSA specification, primarily so that developers can create software that makes full use of the various flavors of compute power available to them.

It isn’t too crazy to think that a future AMD (or Texas Instruments) chip might have a few GPU cores, a few x86 CPU cores, and thousands of tiny ARM cores, all working in perfect, parallel, neuromorphic harmony — as long as the software toolchain is good enough that you don’t have to be a low-level wizard to use all of those resources efficiently.

Intel’s neuromorphic chip design is very different indeed, involving two rather nascent technologies: multi-input lateral spin valves (LSV) and memristors. LSVs are microscopic magnets that change their magnetism to match the spin of electrons being passed through them (spintronics). Memristors are electronic components that increase their resistance as electricity passes through them one way, and reduce their resistance when electricity flows in the opposite direction — and when no power flows, the memristor remembers its last resistance value (meaning it can store data).
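As a rough illustration of that memristor behavior, here is a toy model in Python. The linear drift rate and resistance bounds are invented for the example (real memristors are nonlinear devices); the point is just the directional drift and the persistence of the last value:

```python
# Toy memristor: resistance drifts up or down depending on the direction
# of current flow, and persists when no current flows. The drift rate and
# resistance limits are illustrative assumptions, not real device values.

class Memristor:
    def __init__(self, r_min=100.0, r_max=16000.0, r_init=8000.0):
        self.r_min, self.r_max = r_min, r_max
        self.r = r_init  # resistance in ohms

    def apply_current(self, charge):
        """Positive charge raises resistance, negative charge lowers it."""
        self.r = min(self.r_max, max(self.r_min, self.r + 10.0 * charge))

    def read(self):
        # Reading is non-destructive: the device "remembers" its state,
        # so it can double as non-volatile storage.
        return self.r

m = Memristor()
m.apply_current(+100.0)   # push current one way: resistance rises
high = m.read()
m.apply_current(-300.0)   # reverse the flow: resistance falls
low = m.read()
```

The non-volatility is what makes the memristor interesting as a synapse: the stored resistance survives a power cycle, unlike the state in a conventional SRAM cell.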

By wiring up LSVs and memristors into a cross-bar switch lattice (pictured above), Intel claims it can build a neuromorphic CPU. The idea seems to be that the LSVs act as neurons, while the memristors act as synapses, with the resistance value equating to the “weight” (importance) of the synaptic link. We’re talking about incredibly small components here (probably tens of nanometers), so in theory Intel might be able to build a chip with billions of neurons and synapses — a far cry from the hundred trillion synapses in the human brain, but then again our brains only have a clock speed (refresh rate?) of around 100Hz. Intel’s neuromorphic chip would presumably operate in the gigahertz or terahertz range.
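A minimal sketch of how such a crossbar could compute, assuming conductance (1/R) serves as the synaptic weight and a simple threshold stands in for the LSV neuron — all values are invented for illustration and bear no relation to Intel's actual design:

```python
# Crossbar sketch: memristor conductances act as synaptic weights, and
# each "neuron" fires if its summed weighted input crosses a threshold.
# Resistances, inputs, and the threshold are illustrative assumptions.

resistances = [  # ohms; one row of the crossbar per output neuron
    [1000.0, 4000.0, 2000.0],
    [8000.0, 8000.0, 500.0],
]
inputs = [1, 0, 1]  # spikes arriving on the three input lines
threshold = 0.002   # siemens of total conductance needed to fire

def step(resistances, inputs, threshold):
    outputs = []
    for row in resistances:
        # Weight = conductance (1/R): a low-resistance memristor is a
        # strong synapse. Sum conductance over the active input lines.
        drive = sum((1.0 / r) for r, x in zip(row, inputs) if x)
        outputs.append(1 if drive >= threshold else 0)
    return outputs

fired = step(resistances, inputs, threshold)
```

Here only the second neuron fires, because its strong (500-ohm) synapse sits on an active input line. "Learning" in such a lattice would amount to nudging the resistances up or down, exactly the drift behavior described above.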

When we’ve covered brain-like CPUs before, the focus has always been on imitating the massive parallelism of the human brain. Animal brains have another incredible trait, though: They’re ultra-low-power devices. By some measures a single human brain is more powerful than the fastest supercomputer on the planet, and yet it consumes only around 20 watts.

The Intel researchers posit that their neuromorphic chip could reach a similar efficiency. Unlike state-of-the-art CMOS transistors, which need on the order of a volt to switch on and off, the LSV neurons require only a handful of electrons to change their orientation, which equates to around 20 millivolts. For some applications, Intel thinks its neuromorphic chip could be up to 300 times more energy efficient than the CMOS equivalent.
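To see why millivolt switching matters, note that the energy of a charge-based switching event scales roughly with C·V², so the voltage reduction alone buys a large saving. A back-of-the-envelope sketch — the capacitance value is assumed, and this does not reproduce Intel's 300x figure, which depends on many other factors:

```python
# Rough check of why 20 mV switching is attractive: switching energy in a
# charge-based device goes as C*V^2, so cutting the voltage swing from
# ~1 V (CMOS) to ~20 mV slashes per-event energy quadratically.
# The capacitance is a nominal assumed value; it cancels in the ratio.

C = 1e-15  # farads per device (assumed for illustration)

def switch_energy(voltage):
    return C * voltage ** 2  # joules per switching event

cmos = switch_energy(1.0)    # ~1 V CMOS logic swing
lsv = switch_energy(0.02)    # ~20 mV spintronic switch
ratio = cmos / lsv           # quadratic payoff from the lower voltage
```

The naive ratio comes out far larger than 300x, which suggests Intel's estimate already accounts for overheads (read circuitry, interconnect, leakage) that this toy calculation ignores.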

The one caveat, though, is that this spin-based chip hasn’t actually been built — it’s just a theoretical design that has been simulated on some powerful (conventional!) computers. To my eyes, though, the implementation looks sound — and it can be built using current semiconductor processes, which is handy. Memristors are maturing quickly, and spintronics, because of its ultra-low-power potential, is receiving a lot of attention from research groups all around the world.

As we move from multi-core CPUs to many-core Intel Larrabee/Knights Ferry processors with 50+ cores, heterogeneous AMD Trinity/Kaveri APUs with multiple FPUs and hundreds of individual graphics cores, and the neuromorphic chip detailed here, the decades-old archetypal definition of “CPU” is blurring and morphing into something else entirely. Rather than thinking of a CPU as a collection of transistors, it almost looks like compute cores (or artificial neurons) will become the basic building blocks of processors. Today we’re talking about 2,000 shader cores — in a few years, it might be a few million.

As we’ve covered before, transistor-based silicon chips aren’t going anywhere for a long time yet — but as long as Intel and AMD make it easy enough to program a neuromorphic CPU, I think we’re surprisingly close to the end of simulating massively parallel neural networks on serial hardware and actually building a brain on a chip.

Comments

VirtualMark

Interesting article. I think scientists are quite a way off making the intelligence of even a hamster at the moment, and that’s using powerful supercomputers. From what I remember, they simulate one small part of a brain, and it runs at about 1/10th the speed of real life.

So I don’t really understand what Intel is hoping to accomplish here. What uses and applications will these chips have in the real world? If a supercomputer can’t even begin to get close to a human brain, that means we might have desktops with the intelligence of a slightly dozy ant. I’d be interested to know what software they intend to use these new CPUs for.

http://www.mrseb.co.uk/ Sebastian Anthony

IIRC, IBM has now simulated the brain of a mouse on a big supercomputer, or something like that.

The main thing here is that memristors are analog (the resistance value can vary infinitely), and that the system can work in ‘spikes’ (like neurons). I don’t think this particularly solves the massive parallelism thing — it’s still very hard to build a circuit where every neuron is connected to every other neuron. Presumably we’ll get there eventually tho’. (I mean, transistors are already much smaller than neurons… so now we just have to scale it up :)

Mindbreaker

Every neuron is not attached to every other neuron; each connects to somewhere between 100 and 1,000 others. And that doesn’t have to be achieved physically in a chip: each neuron just needs to record its state in a memory location, and the appropriate ~1,000 neurons read that location.

http://pulse.yahoo.com/_EF54A3FSF7GMIQ7GIYC5PKNDZE Jeff

The immediate practical applications will involve utilizing very large neural networks for improving computer vision and sound processing (including speech recognition). These types of neuromorphic chips could help us realize our augmented reality future, which will demand very good computer vision, along with the sound processing for user commands and such. Neuromorphic chips have already been shown to be capable of improving both of these problem domains, but until certain engineering problems are overcome, they won’t be commercialized and mass produced.

Hopefully, Intel can do it.

jcollake

I don’t think Intel is aiming for any sort of neural network you might see in software for voice recognition, image recognition, AI, and so on. What they intend to do is more likely related to *optimization*. By re-assigning weights dynamically, the CPU can self-optimize for the system on which it is operating. Higher-level functions are for software.

Coffeeman112

Intel designs a brain on a chip or a chip with a brain?:)

Mindbreaker

“our brains only have a clock speed (refresh rate?) of around 100MHz.” Totally untrue: neurons are vastly slower, maxing out at about 1 kHz, and a system can’t go faster than its components. CPUs, for instance, are generally 5-10 times slower than their transistors because they need a pipeline to complete an instruction. Brains are no different; they need a series of neural impulses to do something.

maria albright

“the memristor remembers its last resistance value (meaning it can store data).”

Most people equate the CPU of a system to the human brain. The analogy is a fair one; by the same token, the motherboard would be the central nervous system.

Brad Arnold

“Heterogeneous System Architecture (HSA) would be an open, architecture-agnostic spec…” Since this differs from the von Neumann architecture of virtually all current computer chips, I wonder if it would run current programs that were simply recompiled. Certainly, programs would need to be written differently to run optimally on the HSA architecture.

“We’re talking about incredibly small components here (probably tens of nanometers), so in theory Intel might be able to build a chip with billions of neurons and synapses — a far cry from the hundred trillion synapses in the human brain, but then again our brains only have a clock speed (refresh rate?) of around 100Hz. Intel’s neuromorphic chip would presumably operate in the gigahertz or terahertz range.” There are other advantages of chips over human brain processing: memory, the ability to copy, scalability, and reductionist components, to name a few. Therefore, I predict that you could get superior performance (i.e. AGI) out of a computer process several orders of magnitude smaller than a human brain.

This means that you could theoretically run an AGI on a mobile platform (i.e. a robot) using this (theoretical) chip. Also, it exceeds Moore’s law, so we could see AGI before 2045, when it has been generally predicted to emerge. In other words, the Singularity is coming sooner than we thought.

We’ll soon have to have a debate on the topic of species dominance.
