IBM creates Corelet programming language to make software that operates like the human brain

This site may earn affiliate commissions from the links on this page. Terms of use.

At the International Joint Conference on Neural Networks held this week in Dallas, researchers from IBM have taken the wraps off a new software front-end for its neuromorphic processor chips. The ultimate goal of these most recent efforts is to recast Watson-style cognitive computing, and its recent successes, into a decidedly more efficient architecture inspired by the brain. As we shall see, the researchers have their work cut out for them — building something that on the surface looks like the brain is a lot different from building something that acts like the brain.

Head researcher of IBM’s Cognitive Computing group, Dharmendra Modha, announced last November that his group had simulated over 500 billion neurons using the Blue Gene/Sequoia supercomputer at the Lawrence Livermore National Laboratory (LLNL). His claims, however, continue to draw criticism from others who say that the representation of these neurons is too simplistic. In other words, the model neurons generate spikes like real neurons, but the underlying activity that creates those spikes is not modeled in sufficient detail, nor are the details of connections between them.

To interact with IBM’s “True North” neural architecture simulator, the researchers have developed an object-oriented language they call Corelet. Its building blocks, called corelets, are composed from 256-neuron neuromorphic cores designed to perform specific tasks. The “True North” library already contains some 150 pre-designed corelets that do things like detect motion or image features, or even learn to play games. To play pong, for example, a layer of input neurons would receive information about the “ball” and “paddle” motions, an output layer would send paddle motion updates, and intermediate layers would perform some indeterminate processing.
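The layered pong controller described above can be sketched in plain Python. This is not the real Corelet API (whose syntax IBM has not shown here); it is only a hypothetical illustration of the input/intermediate/output structure, with the layer and function names invented for the example.

```python
# Hypothetical sketch of the layered pong controller described above.
# Not the real Corelet API -- just plain Python showing the structure.

def input_layer(ball_y, paddle_y):
    # Two input "neurons" spike when the ball is above / below the paddle.
    return {"ball_above": ball_y > paddle_y, "ball_below": ball_y < paddle_y}

def intermediate_layer(spikes):
    # Stand-in for the "indeterminate processing" of the middle layers:
    # here it simply relays the spikes onward.
    return spikes

def output_layer(spikes):
    # Output neurons translate spikes into a paddle command.
    if spikes["ball_above"]:
        return +1   # move paddle up
    if spikes["ball_below"]:
        return -1   # move paddle down
    return 0        # hold position

def controller(ball_y, paddle_y):
    return output_layer(intermediate_layer(input_layer(ball_y, paddle_y)))
```

In a real corelet graph the intermediate stage would be a composition of pre-built cores rather than a pass-through, but the wiring of inputs to outputs follows the same pattern.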

The problem with assigning specific functional tasks to specific cores is that it introduces a further rift with real brains, one beyond even the simplicity of the individual neuron models. Real neural networks don’t do just one thing; they do many things simultaneously. If the researchers were seriously attempting to capture particular functions of real brains, I think they would not be building complex million- or billion-neuron systems like the one pictured above. Instead, they would be building far more specific systems composed of just a handful of richly modeled neurons that mimic actual functions of real nervous systems, such as the spinal reflex circuit:

Like a pong controller, a simple network such as this would have inputs, outputs, and intermediate neurons, but unlike pong, its spiking activity would bear traceable relevance to the task at hand. Systems of neurons built on top of a circuit like a reflex arc could be added later, but without that underlying relevance to the real world they are not only meaningless but impossible to comprehend. If researchers insist on jumping straight to massive neuron-count models, however, we might suggest a thought experiment to probe how arbitrary networks might be functionally organized.
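A reflex arc of the kind described above can be captured with very little code. The sketch below uses two leaky integrate-and-fire neurons, sensory feeding motor, so that every output spike traces directly back to a stimulus. The parameter values are purely illustrative and not taken from any real circuit.

```python
# Minimal reflex-arc sketch: two leaky integrate-and-fire neurons
# (sensory -> motor). Parameters are illustrative, not biological.

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.v = 0.0              # membrane potential
        self.threshold = threshold
        self.leak = leak          # fraction of potential retained per step

    def step(self, current):
        self.v = self.v * self.leak + current
        if self.v >= self.threshold:
            self.v = 0.0          # reset after firing
            return True           # spike
        return False

def reflex_arc(stimulus):
    """Feed a stimulus train through sensory -> motor; return motor spikes."""
    sensory, motor = LIFNeuron(), LIFNeuron()
    synapse_weight = 1.2          # one sensory spike suffices to fire the motor neuron
    motor_spikes = []
    for s in stimulus:
        fired = sensory.step(s)
        motor_spikes.append(motor.step(synapse_weight if fired else 0.0))
    return motor_spikes
```

A strong stimulus produces an immediate motor spike, while sub-threshold inputs must accumulate before anything fires, which is exactly the kind of traceable input-to-output behavior the reflex example calls for.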

If an individual neuron is going to generate meaningful spikes, the consensus is that it needs some minimum level of complexity. For the thought experiment, then, let a neuron be represented by a whole person, and the neuron’s spike by that person’s clap. We know from general experience that a large room of clapping human neurons can evolve synchronized clapping from initially random applause within a few seconds — no big deal. We might imagine the crowd of clappers could also quickly answer the question 2+2 by similarly organizing beats of four. The magic, and the relevance for designing network chips, comes when you begin to add the specializations of input and output.

IBM Watson: Now IBM wants to produce a system that derives its intelligence from thinking, rather than merely searching through vast amounts of data.

Instead of presenting the simple 2+2 query to the whole network, we can present it to just a few input units, who transmit the message in whatever way they see fit. Simultaneously, different queries can be presented to other input units. The output units can then be instructed to listen for messages and transmit outputs however they see fit. The key addition we require here is that the intermediate human units can move about a limited space to better hear the activity of their choosing. Finally, we need some driving energetic force to incentivize any behavior in the first place, and also to limit the number of claps, or spikes, each unit can produce. One example of such an organizing incentive might be jelly beans sprinkled onto the hungry crowd as it moves about.

If the amount of clapping an individual can perform is directly confined by the jelly-bean energy that unit can accrue, the energy-incentive loop is closed and we have all the essentials of a neural computing system. Rather than modeling extremely complex neurons in an attempt to capture and comprehend network behaviors, we could simply create the real network just described and record its behavior for observation. I would offer that this would yield greater insight into network dynamics relevant to real brains than any attempt using billions of simple processing elements.
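The energy-incentive loop above can be made concrete with a toy simulation. In the sketch below, each clapper may clap only while holding jelly-bean energy, so total spiking is bounded by the energy supplied; all names and numbers are arbitrary illustrations, not part of any IBM system.

```python
# Toy version of the jelly-bean energy-incentive loop: clapping is
# confined by the energy each unit accrues. Numbers are arbitrary.

import random

def run_crowd(n_clappers=10, steps=50, beans_per_step=3, seed=0):
    rng = random.Random(seed)
    energy = [0] * n_clappers     # jelly beans held by each clapper
    total_claps = 0
    total_beans = 0
    for _ in range(steps):
        # Sprinkle jelly beans onto random clappers.
        for _ in range(beans_per_step):
            energy[rng.randrange(n_clappers)] += 1
            total_beans += 1
        # Each clapper with energy claps, spending one bean per clap.
        for i in range(n_clappers):
            if energy[i] > 0:
                energy[i] -= 1
                total_claps += 1
    # The loop is closed: claps can never exceed beans supplied.
    return total_claps, total_beans
```

Because every clap consumes a bean, the simulation enforces the constraint the thought experiment describes: spiking activity is capped by the energy flowing into the crowd.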

When we realize that each individual neuron, each cell, carries within it the full survival instinct and repertoire that enabled its amoeba-like forebear to thrive and reproduce on its own in a hostile world, we gain some appreciation for the repurposed power each one possesses. Ignoring the complexity of individual neurons beyond simple electrical behavior is folly if we desire to build computing systems with the power of the brain.


So when does Dharma’s, er Dharmendra’s group think they can create an AI that will pass the Touring Test?

Guest

Can they make a programming language that helps that guy look human?

Moshan

You’re such a tough guy.

franzius

You lost me when you started injecting your opinion in the article. So this whole piece is about you and the IBM research is just a backdrop to your ‘genius’?


Dozerman

So, at least now we’ll know what language Skynet’s software will be programmed in.

zapper

This guy already looks threatening to me as if asking me to handover my biological brain in return for a microprocessor based one to be implanted in my now empty skull (so that govt controls what I think , from now on, and also give me reward of pain or pleasure remotely from NSA – remotely piloted ^human^ being )

Alice Bressanoni

nice ass face

gautam sharma

is it really true can’t beleive

John Michael Michaels

most yes


sidd

all i can think is ghost in the shell…

