Project AGI

Building an Artificial General Intelligence

This site has been deprecated. New content can be found at https://agi.io

Thursday, 12 November 2015

New HTM paper - “Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in Neocortex”

[Figure: The artificial neuron model used by Jeff Hawkins and Subutai Ahmad in their new paper (image reproduced from their paper, and cropped). Their neuron model is inspired by the pyramidal cells found in neocortex layers 2/3 and 5.]

They propose that each dendrite is individually a pattern-matching system, similar to a traditional artificial neuron: the dendrite has a set of inputs to which it responds, and a transfer function that decides whether enough of those inputs are active to "fire" its output (though in traditional neurons, continuous nonlinear transfer functions are more common than binary outputs).
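The dendrite-as-pattern-matcher idea can be sketched in a few lines. This is a minimal illustration, not Numenta's implementation; the function name, the set representation of synapses, and the threshold value are all assumptions for the example.

```python
# Illustrative sketch: a dendrite "fires" (binary output) when the
# overlap between its synapses and the currently active inputs
# reaches a threshold. Names and values are hypothetical.

def dendrite_active(synapses, active_inputs, threshold):
    """Return True if enough of this dendrite's synapses see active input."""
    overlap = len(set(synapses) & set(active_inputs))
    return overlap >= threshold

# A dendrite with synapses on inputs {2, 5, 7, 9} and threshold 3
# matches when at least 3 of those inputs are active:
print(dendrite_active({2, 5, 7, 9}, {1, 2, 5, 7}, 3))  # → True (overlap = 3)
print(dendrite_active({2, 5, 7, 9}, {1, 2}, 3))        # → False (overlap = 1)
```

The thresholded overlap plays the role of the transfer function; swapping the binary test for a continuous nonlinearity would give the more traditional neuron described above.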

In the paper, they suggest that a single pyramidal cell has dendrites for recognising feed-forward input (i.e. external data) and other dendrites for feedback input from other cells. The feedback provides contextual input that allows the neuron to "fire" only in specific sequential contexts (i.e. given a particular history of external input).

To produce an output along its axon, the complete neuron needs both an active feed-forward dendrite and an active contextual dendrite; when the neuron fires, it implies that a particular pattern has been observed in a specific historical context.
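The conjunction described above, firing only when a feed-forward dendrite and a contextual dendrite are both active, can be sketched as follows. This is a simplified reading of the paper's neuron model, not its implementation; segment structure, names, and the shared threshold are assumptions.

```python
# Hedged sketch: a neuron with separate feed-forward and contextual
# (feedback) dendrite segments. It fires only when at least one segment
# of EACH kind is active, i.e. a known pattern seen in a known context.

def segment_active(synapses, active_inputs, threshold):
    """A segment is active when enough of its synapses see active input."""
    return len(set(synapses) & set(active_inputs)) >= threshold

def neuron_fires(ff_segments, ctx_segments, ff_input, ctx_input, threshold=2):
    ff_ok = any(segment_active(s, ff_input, threshold) for s in ff_segments)
    ctx_ok = any(segment_active(s, ctx_input, threshold) for s in ctx_segments)
    return ff_ok and ctx_ok  # pattern AND context must both match

# One feed-forward segment, one contextual segment (illustrative data):
ff = [{1, 2, 3}]
ctx = [{10, 11, 12}]
print(neuron_fires(ff, ctx, {1, 2}, {10, 11}))  # → True  (both match)
print(neuron_fires(ff, ctx, {1, 2}, {99}))      # → False (no context)
```

The `and` on the last line of `neuron_fires` is the whole point: without an active contextual segment, even a perfect feed-forward match is silent, which is how the model restricts firing to specific sequential contexts.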

In the original CLA whitepaper, multiple sequential contexts were embodied by a "column" of cells that shared a proximal dendrite, although they acknowledged that this differed from their understanding of the biology.

The new paper suggests that basket cells provide the inhibitory function that ensures sparse output from a column of pyramidal cells having similar receptive fields. Note that this definition of column differs from the one in the CLA whitepaper!
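The inhibitory effect attributed to basket cells is often approximated computationally as a k-winners-take-all step: only the most strongly driven cells in a column stay active. This sketch is that approximation, not the paper's mechanism; the activation values and `k` are illustrative.

```python
# Illustrative k-winners-take-all stand-in for basket-cell inhibition:
# keep the k most strongly driven cells, inhibit the rest, giving a
# sparse output from a population with similar receptive fields.

def sparsify(activations, k):
    """Return the indices of the k most active cells; others are inhibited."""
    ranked = sorted(range(len(activations)),
                    key=lambda i: activations[i], reverse=True)
    return set(ranked[:k])

# Five cells, only the top 2 survive inhibition:
print(sparsify([0.1, 0.9, 0.4, 0.8, 0.2], 2))  # → {1, 3}
```

The result is a sparse, distributed code: which k cells win carries the information, while the inhibition keeps overall activity low.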

The other interesting feature of the paper is its explanation of the sparse, distributed sequence memory that arises from a layer of the artificial pyramidal cells with complex, specialised dendrites. This is also a feature of the older CLA whitepaper, but there are some differences.

Hawkins and Ahmad's model matches the morphology and function of pyramidal cells more closely than traditional artificial neural networks do, and their conceptualisation of a neuron is far more powerful. However, that alone doesn't mean it's better to model neurons this way in silico; what we really need to understand is the computational benefit of modelling these extra details. The new paper claims several such advantages over traditional ANNs.

We follow Numenta's work because we believe they have a number of good insights into the AGI problem. It's great to see this new theoretical work and to have a solid foundation for future publications.