For nearly 70 years, computer scientists have depended on the von Neumann architecture. The computer you’re working on right now still follows this paradigm: an electronic digital system built around a handful of distinct units, including an arithmetic logic unit, a control unit, memory, and input/output mechanisms. These separate units store and process information sequentially, using programming languages designed specifically for that architecture.

But the human brain, which most certainly must be a kind of computer, works very differently. It’s a massively parallel, massively redundant “computer” capable of something on the order of 10¹⁶ processes per second. It’s doubtful that it’s anywhere near as serialized as the von Neumann model. Nor is it driven by a proprietary programming language (though, as many cognitive scientists would argue, it’s likely driven by biologically encoded algorithms). Instead, the brain’s neurons and synapses store and process information in a highly distributed, parallel way.

Which is exactly how IBM’s new programming language, called Corelet, works as well. The company disclosed its plans at the International Joint Conference on Neural Networks held this week in Dallas.

Researchers from IBM are working on a new software front-end for their neuromorphic processor chips. The company is drawing on its recent successes in “cognitive computing,” a line of R&D best exemplified by Watson, the Jeopardy-playing AI. A new programming language will be necessary because conventional languages are a poor fit for this hardware: many of today’s computers still run software whose lineage traces back to FORTRAN, a language IBM developed in the 1950s for its early mainframes.

The new software runs on a conventional supercomputer, but it simulates the functioning of a massive network of neurosynaptic cores. Each core contains its own network of 256 neurons, which operate according to a new model in which digital neurons mimic the independence of biological neurons. Corelets, the equivalent of “programs,” specify the basic functioning of neurosynaptic cores and can be linked into more complex structures. Each corelet has 256 inputs and 256 outputs, which are used to wire corelets to one another.
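IBM hasn’t published the neuron equations here, but the basic idea of a core of simple, independently spiking digital neurons can be sketched with a generic leaky integrate-and-fire model. Below is a minimal Python sketch; the threshold, leak, and random connectivity are illustrative assumptions, not IBM’s actual neuron model.

```python
import numpy as np

# A rough sketch of one "neurosynaptic core": 256 simple digital neurons driven
# by a 256x256 synaptic connection matrix. All parameters are illustrative
# assumptions, not IBM's actual neuron model.
NEURONS = 256
rng = np.random.default_rng(0)

weights = rng.choice([0.0, 1.0], size=(NEURONS, NEURONS), p=[0.9, 0.1])
potential = np.zeros(NEURONS)      # membrane potential of each digital neuron
THRESHOLD, LEAK = 4.0, 0.5         # assumed spiking threshold and per-step leak

def tick(input_spikes):
    """Advance the core one time step; return the vector of output spikes."""
    global potential
    potential += weights @ input_spikes            # integrate incoming spikes
    potential = np.maximum(potential - LEAK, 0.0)  # apply leak, floor at zero
    fired = potential >= THRESHOLD                 # neurons crossing threshold fire
    potential[fired] = 0.0                         # firing neurons reset
    return fired.astype(float)

# Drive the core with random input spikes for a few steps.
spikes = rng.integers(0, 2, NEURONS).astype(float)
for _ in range(5):
    spikes = tick(spikes)
print("neurons firing on the last step:", int(spikes.sum()))
```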

“Traditional architecture is very sequential in nature, from memory to processor and back,” explained Dr. Dharmendra Modha in a recent Forbes article. “Our architecture is like a bunch of LEGO blocks with different features. Each corelet has a different function, then you compose them together.”

So, for example, one corelet might detect motion, another might recognize the shape of an object, and a third might sort images by color. Each corelet would run slowly, but the processing would happen in parallel.
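To make the LEGO-block analogy concrete, here is a hypothetical sketch of how corelets with fixed 256-line inputs and outputs might be composed into a pipeline. The class and function names are invented for illustration and are not IBM’s actual Corelet API.

```python
# Hypothetical illustration of corelet composition -- not IBM's Corelet API.
class Corelet:
    """A black box exposing 256 input lines and 256 output lines."""
    PINS = 256

    def __init__(self, name, fn):
        self.name = name
        self.fn = fn                 # placeholder for the core's spiking network
        self.downstream = []

    def connect(self, other):
        """Wire this corelet's 256 outputs to another corelet's 256 inputs."""
        self.downstream.append(other)
        return other                 # allow chained composition

    def run(self, spikes):
        out = self.fn(spikes)
        for nxt in self.downstream:
            nxt.run(out)
        return out

# Placeholder functions; real corelets would each wrap a spiking neural network.
motion = Corelet("motion-detector", lambda s: s)
shape  = Corelet("shape-detector",  lambda s: s)
color  = Corelet("color-sorter",    lambda s: s)

# Compose the blocks: motion feeds shape, which feeds the color sorter.
motion.connect(shape).connect(color)
motion.run([0] * Corelet.PINS)
```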

IBM has created more than 150 corelets as part of a library that programmers can tap.

Of course, even those hybrid computers won’t be a replacement for the human brain. The IBM chips and architecture may be inspired by the human brain, but they don’t quite operate like it.

“We can’t build a brain,” Dr. Modha told me. “But the world is being populated every day with data. What we want to do is to make sense of that data and extract value from it, while staying true to what can be built on silicon. We believe that we’ve found the best architecture to do that in terms of power, speed and volume to get as close as we can to the brain while remaining feasible.”

Corelet could enable the next generation of intelligent sensor networks that mimic the brain’s abilities for perception, action, and cognition.

Meanwhile, researchers working with RIKEN’s K computer have shown just how demanding brain-scale simulation remains. According to the researchers, it took the machine’s 82,944 processors about 40 minutes to simulate one second of neuronal network activity in real, biological time. And to make it work, some 1.73 billion virtual nerve cells were connected to 10.4 trillion virtual synapses.

Each virtual synapse, positioned between excitatory neurons, was represented by 24 bytes of memory, enough for an accurate mathematical description of the network. The simulation itself ran on the open-source NEST software and occupied about one petabyte of main memory, roughly the combined memory of 250,000 desktop PCs.
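Assuming the reported 24 bytes apply to each of the 10.4 trillion synapses, a quick back-of-envelope check shows the synapse state alone accounts for roughly a quarter of that petabyte, and the 40-minutes-per-second figure works out to a slowdown of about 2,400 times relative to biological time:

```python
# Back-of-envelope check of the K computer simulation figures reported above.
synapses = 10.4e12            # virtual synapses
bytes_per_synapse = 24        # bytes of state per synapse

synapse_memory_pb = synapses * bytes_per_synapse / 1e15
slowdown = (40 * 60) / 1.0    # seconds of wall-clock time per second of biology

print(f"synapse state alone: ~{synapse_memory_pb:.2f} PB")   # ~0.25 PB
print(f"slowdown vs. real time: ~{slowdown:.0f}x")           # ~2400x
```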

The simulation wasn't designed to emulate actual brain activity (the synapses were connected at random) — just its network power. And though massive in scale, the simulated network only represented 1% of the neuronal network in the brain.

“If peta-scale computers like the K computer are capable of representing 1% of the network of a human brain today, then we know that simulating the whole brain at the level of the individual nerve cell and its synapses will be possible with exa-scale computers hopefully available within the next decade,” explained Markus Diesmann in a RIKEN release.

I’m not sure how he can make such a grandiose claim given that this machine, as impressive as it is, required 40 minutes just to crunch a second’s worth of raw brain processing power. And it represented only 1% of the brain’s entire network (could you imagine an array 100 times the size of the one described above? Though, to be sure, Moore’s Law will have something to say about the physical size of such arrays by the end of the 2020s).
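To put rough numbers on that skepticism: scaling from 1% of the brain’s network to the full brain multiplies the problem by about 100, and closing the 40-minutes-per-second gap adds another factor of roughly 2,400, assuming (very optimistically) that everything scales linearly:

```python
# Rough scaling behind the skepticism above; naive linear scaling assumed throughout.
network_scale = 100      # from ~1% of the brain's network to the whole thing
realtime_gap = 40 * 60   # 2,400 seconds of compute per second of biological time

shortfall = network_scale * realtime_gap
print(f"combined shortfall vs. a real-time whole brain: ~{shortfall:,}x")      # ~240,000x

# "Exa-scale" is roughly a 1,000x jump over the peta-scale class used here, so
# the prediction implicitly banks on big algorithmic gains as well as hardware.
print(f"shortfall left after a 1,000x hardware jump: ~{shortfall // 1000:,}x") # ~240x
```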

Regardless, the researchers say the result will pave the way for combined simulations of the brain and the musculoskeletal system using the K computer. They’re obviously hoping that scientists working on the various brain mapping initiatives will latch on to their technology.


Needless to say, matching the computational power of the human brain is one thing; emulating it is something entirely different. Due to its complexity, the human brain likely won't be emulated by a computer until sometime after the 2050s.

George P. Dvorsky serves as Chair of the IEET Board of Directors and also heads our Rights of Non-Human Persons program. He is a Canadian futurist, science writer, and bioethicist. He is a contributing editor at io9, where he writes about science, culture, and futurism, and the producer of the Sentient Developments blog and podcast. He served two terms at Humanity+ (formerly the World Transhumanist Association).
COMMENTS

I think it is generally agreed that you don’t have to have either the human brain’s processing power or its inputs to achieve powerful AI. Instead, there are advantages that AI has (e.g. memory, pre-structure, exactitude, focus, duplication, analytical ability, lack of extraneous processes, etc.) that will inevitably help it exceed human capability in many areas.

Predictably, in the medium term we can expect a human/AI team, or an augmented human, to be the most powerful and meet with the most success. After all, we are looking to build tools that will help us build better tools. The Law of Accelerating Returns. If a new computer language, or a new computer processor, helps us get better results, it is just a means to an end of creating even better AI architecture, so eventually we can reach the Singularity.