
Brain has more than 100 times higher computational capacity than previously thought, say UCLA scientists

Dendrites found to generate nearly 10 times more electrochemical spikes than neuron cell bodies

March 10, 2017

Neuron (blue) with dendrites (credit: Shelley Halpain/UC San Diego)

The brain has more than 100 times higher computational capacity than was previously thought, a UCLA team has discovered.

This finding, which upends long-standing neuroscience textbook accounts, suggests that our brains are both analog and digital computers, and it could lead to new approaches for treating neurological disorders and to developing brain-like computers, according to the researchers.

Illustration of neuron and dendrites. Dendrites receive electrochemical stimulation (via synapses, not shown here) from neurons (not shown here), and propagate that stimulation to the neuron cell body (soma). A neuron sends electrochemical stimulation via an axon to communicate with other neurons via telodendria (purple, right) at the end of the axon and synapses (not shown here). (credit: Quasar/CC).

Dendrites have been considered simple passive conduits of signals. But by working with animals that were moving around freely, the UCLA team showed that dendrites are in fact electrically active — generating nearly 10 times more spikes than the soma (neuron cell body).

Fundamentally changes our understanding of brain computation

The finding, reported in the March 9 issue of the journal Science, challenges the long-held belief that spikes in the soma are the primary way in which perception, learning and memory formation occur.

“Dendrites make up more than 90 percent of neural tissue,” said UCLA neurophysicist Mayank Mehta, the study’s senior author. “Knowing they are much more active than the soma fundamentally changes the nature of our understanding of how the brain computes information.”

“This is a major departure from what neuroscientists have believed for about 60 years,” said Mehta, a UCLA professor of physics and astronomy, of neurology and of neurobiology.

Because the dendrites are nearly 100 times larger in volume than the neuronal cell bodies, Mehta said, the large number of dendritic spikes taking place could mean that the brain has more than 100 times the computational capacity previously attributed to it.
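A minimal back-of-envelope sketch of this scaling argument, in Python. The ratios are the article's own figures; treating "capacity" as roughly proportional to the volume of actively spiking tissue is only an illustrative reading of the claim, not the researchers' actual calculation.

```python
# Back-of-envelope sketch of the ">100x capacity" claim.
# Figures come from the article; the scaling rule itself is an
# illustrative assumption, not the researchers' method.

volume_ratio = 100        # dendrites ~100x the volume of the somata
spike_ratio_sleep = 5     # ~5x more dendritic spikes during sleep
spike_ratio_explore = 10  # up to ~10x more while exploring a maze

# If earlier capacity estimates counted only somatic activity, and
# dendritic tissue is also computing, the implied multiplier is
# roughly the tissue-volume ratio:
print(f"implied capacity multiplier: >{volume_ratio}x")
print(f"dendritic vs. somatic spike rates: {spike_ratio_sleep}-{spike_ratio_explore}x")
```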

Study with moving rats made discovery possible

Previous studies have been limited to stationary rats, because scientists have found that placing electrodes in the dendrites themselves while the animals were moving actually killed those cells. But the UCLA team developed a new technique that involves placing the electrodes near, rather than in, the dendrites.

Using that approach, the scientists measured dendrites’ activity for up to four days in rats that were allowed to move freely within a large maze. Taking measurements from the posterior parietal cortex, the part of the brain that plays a key role in movement planning, the researchers found far more activity in the dendrites than in the somas — approximately five times as many spikes while the rats were sleeping, and up to 10 times as many when they were exploring.

Looking at the soma to understand how the brain works has provided a framework for numerous medical and scientific questions — from diagnosing and treating diseases to how to build computers. But, Mehta said, that framework was based on the understanding that the cell body makes the decisions, and that the process is digital.

“What we found indicates that such decisions are made in the dendrites far more often than in the cell body, and that such computations are not just digital, but also analog,” Mehta said. “Due to technological difficulties, research in brain function has largely focused on the cell body. But we have discovered the secret lives of neurons, especially in the extensive neuronal branches. Our results substantially change our understanding of how neurons compute.”

Abstract of the Science paper:

Neural activity in vivo is primarily measured using extracellular somatic spikes, which provide limited information about neural computation. Hence, it is necessary to record from neuronal dendrites, which generate dendritic action potentials (DAP) and profoundly influence neural computation and plasticity. We measured neocortical sub- and suprathreshold dendritic membrane potential (DMP) from putative distal-most dendrites using tetrodes in freely behaving rats over multiple days with a high degree of stability and sub-millisecond temporal resolution. DAP firing rates were severalfold larger than somatic rates. DAP rates were modulated by subthreshold DMP fluctuations, which were far larger than DAP amplitude, indicating hybrid, analog-digital coding in the dendrites. Parietal DAP and DMP exhibited egocentric spatial maps comparable to pyramidal neurons. These results have important implications for neural coding and plasticity.

Ray says 10^14-10^16 calculations per second are needed to get something equivalent to the human brain (The Singularity Is Near). Does this mean the range needs to be shifted (to 10^16-10^18), or is this just a confirmation of the upper value (10^16)?

It would be good to know, since this could potentially delay human-level machine intelligence by a decade.

It’s not so much an issue of speed; it’s mostly a power and data-storage issue. The human brain consumes roughly 20 watts of power (the whole body, about 100 watts). It has ~100 billion neurons with ~10 trillion dendrites, each storing and processing ~10 action potentials.

The best computer performance currently is about 5 GFLOPS per watt, so getting to 10 exaFLOPS would require about 2 gigawatts of power. To match the human brain's ~20 watts, a computer system would have to be roughly 100 million times more efficient.
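A minimal sketch of the arithmetic above, assuming ~5 GFLOPS/W for today's best machines, a 10-exaFLOPS target, and ~20 W for the brain (all rough, order-of-magnitude values):

```python
# Sketch of the power-efficiency gap, using the rough figures above.

GFLOPS_PER_WATT = 5.0         # assumed efficiency of today's best machines
TARGET_FLOPS = 10e18          # 10 exaFLOPS
BRAIN_POWER_W = 20.0          # commonly cited brain power draw

machine_power_w = TARGET_FLOPS / (GFLOPS_PER_WATT * 1e9)  # ~2e9 W = 2 GW
efficiency_gap = machine_power_w / BRAIN_POWER_W          # ~1e8, i.e. 100 million x

print(f"machine power: {machine_power_w:.1e} W (~2 gigawatts)")
print(f"required efficiency gain: {efficiency_gap:.0e}x")
```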

For the data, it would require 100 terabytes of coordinated dynamic memory, all reading, writing, and processing in parallel, with the architecture aligned to the same structure as a brain.

The hardware will be the easy part. People overlook the fact that the software to manage all of that is what will need a lot of work. Emulating a human-level concept of consciousness could easily take decades.

As a scientific novice, I had the same question as joeatiyah. But please phrase your response in more layman's language if you can.

Did Ray always assume that the human brain had this enhanced level of capacity (10^14 vs. 10^16 calculations per second)? And how do these idealized brain capabilities compare to our normal, average use of those capabilities?

Being able to say that people have the capability to do much more to “keep up” with the brain enhancements that a computer may offer might go a long way to overcoming objections to such enhancements.

Personally, I’ve enjoyed having read The Singularity Is Near when it came out and strutting about these past 10 years, knowing that the Singularity was coming and broadcasting that fact to all who would listen.

But now that the literature about it is filling out and there are people in my life actually getting undergraduate degrees in AI, I’m beginning to get terrified of the Singularity again.

I’m afraid of the inequalities sure to come about between those who are enhanced and those who are not, and of what that will do to our fragile social and political fabric. I just can’t see comparing this transformation to anything that has come before, not least because it will come so quickly.

Enough for today. Just add me to the worrier list; I’m feeling out of control, to a very upsetting degree, over these changes on the horizon to the lives of my children and grandchildren.

Steve: sure. In The Singularity Is Near, starting on p. 122, Ray provides various estimates for the computational capacity of the human brain and settles on the range of 10^14 (100 trillion calculations/sec) to 10^16 (10 quadrillion, or 10,000 trillion, calculations per second). That's a ratio of 10^2 (16 minus 14), or 100. So if we start with 10^14, the increased capacity of 100x described in the paper matches this estimated ratio.
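The ratio arithmetic, spelled out as a trivial check in Python:

```python
# Ratio between Kurzweil's low and high brain-capacity estimates.
low = 1e14    # 100 trillion calculations per second
high = 1e16   # 10 quadrillion calculations per second
print(high / low)   # 100.0 -- the same ~100x factor the paper reports
```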

Perhaps delayed a little.
IBM put it at 42 PFLOPS (4.2×10^16), so 100x more = ~4.2×10^18 (~4 exaFLOPS), perhaps around 2020.
Regardless, I question whether any human processes information at that level.
Virtually all fMRI images taken while a subject is doing a task show most of the brain being quiet.
Furthermore, there are many tasks that computers can do as well as, if not better than, humans at a fraction of the processing power.
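A quick check of the extrapolation in the first line of this comment, using the figures as stated there:

```python
# IBM's 42-PFLOPS brain estimate, scaled by the paper's ~100x factor.
ibm_estimate_flops = 42e15       # 42 PFLOPS = 4.2e16 FLOPS
scaled = ibm_estimate_flops * 100
print(f"{scaled:.1e} FLOPS")     # ~4.2e18, i.e. ~4 exaFLOPS
```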

An alternative measure is TEPS (traversed edges per second), which looks specifically at connections.
See aiimpacts.org for an explanation; this may change that calculation as well, by a factor of 100x.

This. With a little luck and hard work we might be able to nail down a rough idea of the processing power of the brain in the next few decades! Emulation is a pipe dream when we don’t know what we are actually trying to emulate. Singularity in about 70 years for mine. Very much human/AI fusion at that point.

This appears to be a reasonable estimate, in view of the fact that the study referenced in the article points to how little is known about how the brain functions and what quantum-level processes may be involved in consciousness.

The amazing hubris of people who can concoct stories of human obsolescence and then wonder about the election of a Donald Trump. The best and the brightest flaunt the message that humans will soon become obsolete, while most of humanity is concerned about what kind of world we are creating for our grandchildren. Perhaps the analysts have it wrong: Trump is not a result of globalism and the increasingly insecure status of the American worker, but rather a result of the intellectual elite creating a world where their grandchildren can have no future. No wonder the Trumpists don't listen to experts, when the experts have no answers to the real problem.