'Corelets' Prime Cognitive Computers

PORTLAND, Ore. — International Business Machines Corp. (IBM Research, San Jose, Calif.) unveiled a new cognitive-computer architecture and programming model inspired by the human brain at the International Joint Conference on Neural Networks (IJCNN, Aug. 4-9, Dallas, Texas). The new "corelet" architecture and programming model aims to evoke cognitive functions from future brain-like computers by amassing a library of algorithms that mimic all the basic functions performed by the neural networks of the human brain.

"What we have done is built a radically new computer architecture -- including silicon and software -- that is inspired by the brain's functions, its low power and its compact size," said Dharmendra Modha, Principal Investigator and Senior Manager at IBM Research (San Jose, Calif.) in an interview with EE Times. "This architecture is non-Von Neumann, modular, parallel, distributed, fault tolerant, event driven, and scalable."

IBM hopes its corelet architecture and programming model will inspire a new generation of smart sensor networks that can perform brain-like cognitive functions including perception, cognition, and actuation. By emulating the brain's functions in silicon circuitry that runs asynchronously, IBM aims for its massively parallel architecture to transcend the sequential operation of today's computer cores with a distributed, highly interconnected, cognitive-computing architecture.

"Our ultimate goal is to create a brain-in-a-box with 100 trillion synapses but consuming only a kilowatt of power," said Modha, adding:

But there will be many milestones along the way. For instance, if you take the human eye, it processes about one terabit of data per day. If you emulate the eye and visual cortex with our architecture and programming model, then make it available in a low-power compact-volume device, it would enable all types of applications in machine vision and robotics, even automotive and consumer smartphones. For instance, a medical application might be lightweight [electronic] glasses that, like a guide dog, help the visually impaired navigate through cluttered rooms by finding the safest path.
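For scale, the "one terabit per day" figure quoted above works out to a fairly modest sustained data rate. Here is a quick back-of-the-envelope calculation (our arithmetic, not a figure from IBM):

```python
# Back-of-the-envelope check: what sustained rate does
# "one terabit of data per day" imply?
bits_per_day = 1e12                     # 1 terabit
seconds_per_day = 24 * 60 * 60          # 86,400 s
rate_bps = bits_per_day / seconds_per_day
print(f"{rate_bps / 1e6:.1f} Mbit/s")   # roughly 11.6 Mbit/s
```

In other words, the challenge is less raw bandwidth than the continuous, low-power pattern extraction performed on that stream.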

IBM's work is part of the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project sponsored by the Defense Advanced Research Projects Agency (DARPA), which recently injected another $12 million into IBM's collaborative effort with Cornell University and iniLabs Ltd. (a spin-off of the Institute of Neuroinformatics at the University of Zurich and Eidgenössische Technische Hochschule -- ETH -- Zurich, Switzerland). Together the partners have so far received $53 million over four phases to create the cognitive computers of the future.

Interconnected networks of neurosynaptic cores implement the brain's functions with short-range connections and an intra-core synaptic crossbar taking inputs on its dendrites and outputting results to its axons. Information flows from axons to neurons gated by binary synapses. Buffers hold incoming spikes for delayed delivery while the network sends spikes from neurons to axons. SOURCE: IBM

For SyNAPSE, IBM and its collaborators have already demonstrated a cognitive computer chip set that emulates the neurons, synapses, dendrites, and axons in the brain's neural networks using special-purpose silicon circuitry. The hardware architecture combines neurosynaptic cores featuring digital spiking neurons with on-chip, ultra-dense crossbar synapses all driven asynchronously with event-driven communications.

It works by reading sensors with input dendrites, which charge up the appropriate neurons until they fire a pulse down their output axons, which connect to the dendrites of the next layer of neurons. Algorithms then perform pattern recognition on the signals, passing the perceived objects to cognition, which decides on a course of action and sends signals down axons to the actuators that interface with the real world. In principle, any amount of big data from sensors could be quickly analyzed by such parallel algorithms running on asynchronous, distributed networks of neurosynaptic cores.
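The charge-accumulate-fire mechanism described above can be sketched in a few lines. The following is an illustrative toy model, not IBM's actual corelet code: integrate-and-fire neurons behind a binary synaptic crossbar, with the class name, threshold, and leak values chosen for demonstration only.

```python
# Toy sketch of one neurosynaptic core: spikes arriving on axons pass
# through a binary crossbar and charge up neurons until they fire.
import numpy as np

class NeurosynapticCore:
    def __init__(self, n_axons, n_neurons, threshold=3.0, leak=0.5, seed=0):
        rng = np.random.default_rng(seed)
        # Binary crossbar: synapses[a, n] == 1 wires axon a to neuron n.
        self.synapses = rng.integers(0, 2, size=(n_axons, n_neurons))
        self.potential = np.zeros(n_neurons)   # membrane potentials
        self.threshold = threshold
        self.leak = leak

    def step(self, axon_spikes):
        """Deliver one tick of incoming spikes; return which neurons fire."""
        # Each spiking axon adds charge to every neuron it is wired to.
        self.potential += axon_spikes @ self.synapses
        self.potential = np.maximum(self.potential - self.leak, 0.0)
        fired = self.potential >= self.threshold
        self.potential[fired] = 0.0            # reset fired neurons
        return fired

core = NeurosynapticCore(n_axons=4, n_neurons=3)
spikes = np.array([1, 0, 1, 1])                # events on three axons
for t in range(5):
    print(t, core.step(spikes).astype(int))
```

A real core adds the delay buffers shown in the figure and event-driven (rather than ticked) communication, but the data flow, axons through a crossbar into accumulating neurons, is the same.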

This sounds like the periodic resurgence of interest in neural nets that has been going on for decades. While our brains do well with neural nets, they use a lot of neurons and a lot of synapses to figure out patterns. Once the pattern is understood: "IF the temperature rises quickly THEN ring the fire alarm", it can be programmed on a trivially simple logic gate. I think that neural nets are often used to address problems that are not well understood, and the programming of the neural nets is not well understood either. Knowing what we want and finding an efficient way to get to the intended destination is more likely to lead to programming success.

It's great to see experimentation in architectures. However, these usually come at the cost of needing a whole new programming approach, which means that, outside of the chip developers, no one knows how to write software for this beast yet.