Tag Archive: spikingneural

Last year it was announced that quantum vibrations had been found in microtubules. Microtubules are hollow structures in the cytoplasm of neurons, the cell substance between the cell membrane and the nucleus. This is extraordinary, as such quantum effects are thought to require very cold temperatures, and biological systems have been considered far too warm for such things to occur. Further to this, the finding lends some support to a controversial quantum theory of consciousness by Sir Roger Penrose and Stuart Hameroff that is some 20 years old. You can read about it here.

Neurons are the basic functional units of the brain. The conventional view is that they transmit information using electrical signals called action potentials. A neuron has a membrane that serves as a barrier separating the inside of the cell from the outside. The membrane voltage of a neuron is dictated by the difference in electrical potential between the inside and outside of the cell. Neurons become electrically charged through membrane ion channels and pumps that move ions, which carry different electrical charges, across the membrane. Neurons constantly exchange ions with their extracellular surroundings in this way. In doing so they can not only maintain a resting potential, but also generate action potentials by depolarising the membrane beyond a critical threshold. Action potentials are transmitted between neurons, allowing them to communicate.
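The depolarise-past-a-threshold mechanics described above can be sketched as a minimal leaky integrate-and-fire model. All constants here are illustrative round numbers, not measured values:

```python
# Hypothetical parameters for illustration; real neurons vary widely.
V_REST = -70.0      # resting potential (mV)
V_THRESH = -55.0    # critical threshold for an action potential (mV)
V_RESET = -75.0     # post-spike reset potential (mV)
TAU = 10.0          # membrane time constant (ms)
DT = 0.1            # integration step (ms)

def simulate_lif(input_current, duration_ms=100.0):
    """Leaky integrate-and-fire: the membrane decays toward rest,
    input depolarises it, and crossing threshold emits a spike."""
    steps = int(duration_ms / DT)
    v = V_REST
    spike_times = []
    for i in range(steps):
        # Leak toward rest plus input drive (simple Euler integration).
        v += DT / TAU * (V_REST - v + input_current)
        if v >= V_THRESH:            # depolarised past the critical threshold
            spike_times.append(i * DT)
            v = V_RESET              # membrane resets after the spike
    return spike_times

print(len(simulate_lif(20.0)))   # strong drive produces regular spiking
print(simulate_lif(0.0))         # no drive, membrane stays at rest: []
```

With no input the membrane never leaves rest, so no spikes occur; with sufficient drive the cell fires repeatedly, which is the basic behaviour the conventional model describes.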

This functionality can be encoded in an algorithm, which means that the conventional biological model of the brain can be simulated on a computer. In his books Roger Penrose critiques Artificial Intelligence research by claiming that human understanding is essentially non-algorithmic and therefore non-computational. The argument is derived from the Church-Turing Thesis and Gödel’s Incompleteness Theorem, which are considered (by Penrose and others, but not all) equivalent to each other.

Penrose’s argument goes something like this: suppose there is an algorithm for deciding whether a mathematical proposition is true. This algorithm must be consistent, otherwise its decisions about propositions cannot be trusted as correct. However, according to Church-Turing and Gödel, the algorithm, if it is consistent, cannot by definition be applied to itself to discover whether it is consistent/true. The implication for AI is that either we cannot know if something is really true, or the method used to ascertain truth cannot be known or validated as correct. Penrose believes that our ability to know mathematical, and indeed all, truths is unassailable, because such truths, particularly mathematical ones, are ideal. In turn he suggests that we know our understanding is correct, and therefore we know something that cannot be known algorithmically.

This leaves us in a number of positions. Either (A) our understanding is algorithmic but we can never understand how it works, (B) our method of understanding is algorithmic but not consistent, or (C) our understanding is non-algorithmic and therefore requires more than our current conventional biological understanding of the brain. Regarding (A), I believe it is possible to develop complex computational systems for which, due to their innate complexity, the details of their workings cannot be fully known. However, we are still able to use them to solve problems. Liquid State Machines are a good example of such a methodology currently being employed. Hence, I don’t think it is necessary to fully understand our method of deriving understanding in order to create AI. Regarding (B), I think Penrose’s attachment to the ideality of mathematical truths, i.e. their timeless and absolute truth, makes him feel that the ability to grasp them is somehow special and unassailable. I would regard this as a fallacy. A large part of what brains do is statistical pattern recognition. Our ability to understand fuzzy concepts such as a ‘chair’ may rely on a similar mechanism to that which is used to understand non-fuzzy things like mathematical truths. The reason the latter is so much more precise is not due to the cognitive systems applied to it but due to the thing itself being so much more precise. Hence, I doubt that our understanding is consistent in a Church-Turing/Gödel sense. It is just that we do a damn good job when the subject matter is amenable.

Whilst I think what I have argued for (A) and (B) may discount Penrose’s cognitive requirement for (C), I don’t think that it should all be discarded just yet. Penrose argues that quantum mechanisms are non-algorithmic and super-computational, and therefore may, if tapped into, provide a mechanism for understanding. Although I don’t feel this is necessary, I would agree with Penrose’s critique of the strong AI position that consciousness emerges from algorithmic complexity alone. Algorithms can be implemented in many mediums, even using cogs and pulleys. It does seem ridiculous that a system of cogs and pulleys, if complex enough, would become conscious. Therefore, one may conclude that algorithmic complexity alone is not enough. I would suggest that such complexity, if instantiated in a particular medium (e.g. biological brains), gives rise to consciousness. However, our current understanding of biology and classical physics does not encompass anything that can explain the phenomenon of consciousness. Perhaps an interaction between complex biological systems and quantum mechanics, with all its strange phenomena such as entanglement, may open the door to our understanding of consciousness.

The visionary inventor, technologist, and futurist Ray Kurzweil now works at Google. He is heading a research team trying to crack artificial general intelligence, seemingly by building a model of the neocortex. Google offers him unmatched resources to do this. Kurzweil recently gave a talk at Google I/O about his thoughts on the matter, which you can watch here.

Ray’s talk doesn’t give away much about the tech he is developing, so I thought I’d buy his most recent book, How To Create A Mind, and see how he intends to achieve his goal. Ray did early and groundbreaking work in text and speech recognition, and the technologies he developed are used these days in products such as Apple’s Siri. Ray states in his book that the neocortex is a vast hierarchical pattern recognition system, and I don’t think anyone would contradict this. Further to this, he cites communication with Henry Markram, whose work has shown that there are repeated patterns of connectivity in the brain, with assemblies of 12 or so neurons connected in a Lego-like way. These assemblies are presumed to be individual pattern recognisers, and a hierarchical system of them would support hierarchies of feature detectors leading all the way up to abstract concepts. However, I find the implicit belief that all the brain does is pattern recognition rather naive. Questions such as how pattern recognition relates to thought processes, behaviour, or creativity are not even approached.

Whilst his book goes on to discuss philosophical matters such as ‘what is the similarity between a computer and a brain?’, ‘do we have free will?’, and ‘what is the notion of identity?’, not only does he cover old ground here, but he also does not in any way relate them to his design thesis. His main and concluding thoughts concern the Law of Accelerating Returns, which states that once something becomes an information technology it is subject to exponential price/performance improvement. The problem Ray faces is that the mechanics of thought are presently unknown, therefore not yet an information technology, and so not subject to this law. Unfortunately, Ray has not really suggested much on how we might tackle understanding and emulating the mechanics of thought; or perhaps he does know, but Google want him to keep it under wraps.

The FET Flagship Programme is a new initiative launched by the European Commission, which has awarded one billion euros over ten years to the Human Brain Project. The leader of the project, Henry Markram, a professor of neuroscience at the École Polytechnique Fédérale de Lausanne in Switzerland, said earlier this month that it could not be undertaken without this kind of funding.

The project will simulate ‘everything we know about the human brain’ in supercomputers. The human brain has approximately 80 billion neurons, each with around 10,000 synaptic inputs. As you are no doubt aware, it is not the sheer number of neurons that will make the simulation work; the key to the brain’s cognitive power is how they are wired together, and we only know so much about that.

The project website states:

“The brain, with its billions of interconnected neurons, is without any doubt the most complex organ in the body and it will be a long time before we understand all its mysteries. The Human Brain Project proposes a completely new approach. The project is integrating everything we know about the brain into computer models and using these models to simulate the actual working of the brain. Ultimately, it will attempt to simulate the complete human brain. The models built by the project will cover all the different levels of brain organisation – from individual neurons through to the complete cortex. The goal is to bring about a revolution in neuroscience and medicine and to derive new information technologies directly from the architecture of the brain.”

Synchrony is to do with oscillations in the brain. I posted about what oscillations are here. To recap, a population of neurons that repeatedly fire together (burst), then go quiet, and then fire together again is said to oscillate. The speed of the oscillation is called the frequency and is measured in Hertz (Hz). When two distinct populations oscillate at different frequencies they are desynchronised, but when they both oscillate at the same frequency they are said to be synchronous.

Synchrony is a different concept to resonance. Resonance is where one thing is oscillating and a second thing is not, but then the second starts to oscillate at the same frequency as the first. Resonance is therefore when one thing oscillates in sympathy with another. The reason the second oscillates in sympathy is some connection between the two. For example, one oscillating object may be causally linked to another by the gas in our atmosphere, and these vibrations may affect the second so that it too starts oscillating.

Synchrony is also caused by a connection between two objects. Unlike resonance, where one object is originally oscillating and one is not, with synchrony both objects are originally oscillating. The key is that the frequencies at which each is originally oscillating are different. When they synchronise they may settle at a frequency that is different from either of the original frequencies. So for example, you may have two pendulums connected by the beam they are both hung upon. One may be swinging at 20 Hz and the other at 40 Hz. The beam connecting the two creates a causal interaction. After a while and much interaction, both may end up oscillating at 30 Hz. Both are synchronised to the same frequency, but at a different frequency than either was at originally. The reason they may have ended up at a different frequency is that the causal interaction goes both ways. The oscillation of one affects the oscillation of the other, and vice versa. With resonance the causal effect is one-way, hence the second object oscillating in sympathy at the frequency of the first.
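The pendulum example can be sketched with two mutually coupled Kuramoto phase oscillators. The frequencies match the example above, while the coupling strength is an illustrative assumption (it just needs to be strong enough for locking):

```python
import numpy as np

# Two Kuramoto phase oscillators standing in for the coupled pendulums.
F1, F2 = 20.0, 40.0                  # natural frequencies (Hz), as in the example
W1, W2 = 2*np.pi*F1, 2*np.pi*F2      # angular frequencies (rad/s)
K = 100.0                            # mutual coupling (rad/s); must exceed half the frequency gap
DT = 1e-4                            # time step (s)

theta1, theta2 = 0.0, 0.0
d1 = d2 = 0.0
for _ in range(20000):               # simulate 2 seconds
    d1 = W1 + K*np.sin(theta2 - theta1)   # each oscillator pulls on the other...
    d2 = W2 + K*np.sin(theta1 - theta2)   # ...so the causal interaction goes both ways
    theta1 += DT*d1
    theta2 += DT*d2

# Once locked, both run at the mean of the two natural frequencies: 30 Hz.
f1_locked = d1 / (2*np.pi)
f2_locked = d2 / (2*np.pi)
print(round(f1_locked), round(f2_locked))   # -> 30 30
```

Because the coupling term is symmetric, the frequency pulled down in one oscillator is exactly the frequency pulled up in the other, so they meet at the midpoint, 30 Hz, just as in the pendulum story.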

Now back to synchrony in the brain. In the brain you may have one population of neurons oscillating at one frequency and another population oscillating at another frequency. The neurons in one population are causally connected to the other by synapses, and vice versa. Over time the oscillations synchronise so that the bursts of firing in each population occur at the same frequency. This can be likened to two people hitting a drum at the same tempo. Not only that, but they may also have the same phase. The phase refers to when the beats happen. Imagine two people drumming at the same tempo. Even though each is at the same tempo, one person may hit the drum when the other person is quiet, and vice versa. If this is so, they are said to be completely out of phase. If they hit the drum at the same time, and therefore quiet moments also happen at the same time, they are said to be in phase. Similarly, if the two neural populations are synchronous and their bursts of firing and moments of quiet occur at the same time, then they are in phase.

Gamma-band oscillations (a population of neurons firing together at a rate of 30-80 Hz) can emerge in a population of excitatory and inhibitory neurons. The inhibition causes the moments of quiet in the oscillation. This provides windows for interaction at the moment the inhibition wears off and there is a burst of firing. Excitatory signals from a different oscillating population can take advantage of this because gamma-band oscillations are sufficiently regular to allow prediction of the next burst. As long as the travelling time from the sending to the receiving group is also reliable, their communication windows for input and output are open at the same times (i.e. when the bursts occur). Packages of spiking signals from one population can therefore arrive at the other neuronal group in precise synchronization and enhance their impact. In short, synchronisation between two populations allows them to work together and provides the optimal conditions for transferring information. Pascal Fries discusses the mechanistic consequences of neuronal oscillations and calls this hypothesis ‘communication through coherence’. You can read a more technical report by him here.
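The windowing idea can be sketched with a toy model, assuming a simple sinusoidal excitability for the receiving population (the shapes and numbers here are illustrative, not Fries’s model):

```python
import numpy as np

# Toy sketch of 'communication through coherence': the receiver's gain waxes
# and wanes with its gamma cycle, so spike packets timed to the excitable
# phase (just as inhibition wears off) have far more impact than packets
# arriving mid-inhibition.
GAMMA_HZ = 40.0
T = np.arange(0, 0.5, 1e-3)                        # 500 ms at 1 ms resolution

# Receiver gain: high during the burst, zero during inhibition.
gain = np.clip(np.sin(2*np.pi*GAMMA_HZ*T), 0, None)

def packet_impact(phase_offset):
    """Total impact of input packets arriving once per gamma cycle
    at a fixed phase offset relative to the receiver's bursts."""
    packet = np.clip(np.sin(2*np.pi*GAMMA_HZ*T + phase_offset), 0, None)
    return float(np.sum(gain * packet))

in_phase = packet_impact(0.0)        # packets land in the communication window
anti_phase = packet_impact(np.pi)    # packets land during inhibition
print(in_phase > anti_phase)         # -> True
```

In-phase packets coincide with the open windows and sum to a large total impact, while anti-phase packets arrive only while the receiver is inhibited and have essentially no effect.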

I am going to write a few posts on basic concepts in brain science. This first one is about oscillations.

A group of neurons that are close together is referred to as a population or cluster. A population will have a specific role, e.g. responding to a particular stimulus such as a cat.

When the neurons in a population fire at roughly the same time, then go quiet, then fire again, and repeat this process, it is called an oscillation. The time when they fire is called a burst of firing. The number of bursts in a second is the frequency of the oscillation. A frequency of 1 Hertz (‘Hz’ for short) is 1 oscillation a second, which means that there will be one burst of firing and one period of silence. 10 Hz is 10 oscillations a second, 50 Hz is 50 oscillations a second, etc.

Different names are given to different ranges of oscillation frequency (these ranges are also called rhythms). The delta band rhythm ranges from 0.1−3.5 Hz. Theta rhythm ranges from 4−7.5 Hz. Alpha band is 8−13 Hz. Beta is 14−30 Hz, and gamma is 30−80 Hz.
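The named bands can be captured in a small lookup helper. The band edges follow this post; other authors draw the boundaries slightly differently:

```python
# Frequency bands (rhythms) as listed above; boundaries vary between authors.
BANDS = [
    ("delta", 0.1, 3.5),
    ("theta", 4.0, 7.5),
    ("alpha", 8.0, 13.0),
    ("beta", 14.0, 30.0),
    ("gamma", 30.0, 80.0),
]

def band_name(freq_hz):
    """Return the rhythm name for a frequency in Hz (None if between bands)."""
    for name, lo, hi in BANDS:
        if lo <= freq_hz <= hi:
            return name
    return None

print(band_name(6.0))    # -> theta
print(band_name(40.0))   # -> gamma
```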

The amplitude or power of the oscillation/rhythm is dictated by the number of neurons in a population that fire during a burst. If there is a population of 200 neurons and 10 fire in the burst that will have a lower power than if 150 neurons fire. 200 neurons firing during the burst in a population of 200 neurons will have the maximum possible amplitude/power.
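The worked example above amounts to a simple fraction, sketched here with a hypothetical helper function name:

```python
def burst_power(firing, population_size):
    """Relative amplitude/power of a burst: the fraction of the
    population that fires. 1.0 is the maximum possible power."""
    return firing / population_size

# The example from the post: in a population of 200 neurons,
# 10 firing gives much lower power than 150 firing.
print(burst_power(10, 200))    # -> 0.05
print(burst_power(150, 200))   # -> 0.75
print(burst_power(200, 200))   # -> 1.0 (maximum amplitude/power)
```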

The various rhythms have diverse associations. Thalamocortical networks display increased delta band power during deep sleep. Theta activity is increased during memory encoding and retrieval. Alpha band changes are associated with attentional demands. Beta oscillations have been related to the sensorimotor system. Of all the frequency bands the role of gamma is thought to be most extensive and is hypothesized to provide a mechanism that underlies many cognitive functions such as: attention, associative learning, working memory, the formation of episodic memory, visual perception, and sensory selection.

So for example, a population that responds to a cat with a gamma oscillation of very high power may indicate that you are attending to a very strong visual perception of a cat.

Having studied western philosophy at Uni, I had a lot of respect for Buddhism when I read about it. The majority of what it states is very sound and in line with modern thinking. In fact, Buddhism is in many ways 2,500 years ahead of western thought, which is quite incredible.

The basic principle underlying the way Buddhism sees the world is causality and impermanence. This can be interpreted in line with modern materialism and dynamical systems theory. Put simply, we can view the world as a dynamical system with a set of variables, say for example atoms. Some of these come together to form a thing. However, the things that are formed are impermanent. For example, all the atoms in a person’s body change many times over their lifetime until eventually it completely disintegrates. This is an example of impermanence. What we believe is a thing is really the persistence of form over time, not stuff such as matter. The dynamical system also, by definition, exhibits causality.

Buddhism defines the self as having five aspects: matter, sensation, perception (i.e. pattern recognition/concepts), mind (i.e. thoughts), and consciousness. These aspects should not naively be viewed as components that plug together, but rather as qualities of the self which may overlap. It’s pretty hard to define such a thing as the self, and the Buddha did a pretty good job here, I think. The self in Buddhism doesn’t exist but is illusory, due to the fact that there are no real things, just perceived form over time.

The self is dukkha, which is often inappropriately translated as suffering but is better described as thirst: the continual striving and wanting to hold on to things. Dukkha therefore makes life unsatisfactory, as we can’t hold on to stuff because it is impermanent.

Here is where it starts to get sketchy. In order to get rid of the bad effects of dukkha, one should let go of holding on to things, of conceptualisation, and of the self. Now, although there is no objective reason to say that a life lived through intellectual conceptualisation is more right or wrong than a life without it, we may concede that following a Buddhist approach may lead to a more content life. Where it turns into mumbo jumbo is the claim that there is a causal effect of having a self-focused life which continues after death. Although original Buddhism doesn’t say anything about reincarnation, this causal effect is not simply, say, the social effect you have had on people continuing, but rather an actual physical causal effect, and it is the negative effect of causing more dukkha.

The method of achieving Buddhist goals is through lifestyle and meditation. Through this one reaches a nirvanic state in which conceptualisation and the self do not appear. One might argue that the seeming attainment of nirvana through meditation may be illusory, just a high state similar to taking psychotropic drugs. However, neurological studies using brain scans of people who practise meditation have shown amazing results, such as gamma power 30 times higher than normal, heightened activity in brain areas that respond to the things being meditated on, etc. So it may be possible to control your brain to such a degree that you suppress lots of conceptual features.

Jill Bolte Taylor is a neuroanatomist who suffered a stroke. Whilst it was happening she was aware of certain high-level brain functions shutting down (such as language and internal monologue). She found herself in a nirvana-like state. You can watch a TED video where she talks about her experience here.

The coolest toy I’ve seen lately. Webots is a robot simulator that runs on the ODE physics engine. Lots of real robot models are included, such as Kheperas and even the iCub. You can also build your own robots to simulate. Robot controllers can be coded in lots of languages, and you can even link directly to MATLAB. How cool is that!! It only costs £215/$342 for the normal version.

I haven’t posted on this blog for almost 2 years. I have been snowed under with the PhD and, to be frank, couldn’t think about anything else. I am going to try to start posting again. As a sign of goodwill I have set up a nice new look. Hope you like it 🙂

A localised group of neurons firing synchronously at 30-100 Hz is referred to as a local field potential gamma oscillation. These oscillations are important for spike-timing-dependent plasticity (STDP) to occur. Synchronized activity lasting 10–30 ms in the gamma frequency range creates a narrow time window for the coincident activation of pre-synaptic and post-synaptic cells needed by STDP (for more details read here). Slower oscillations do not provide a narrow enough window, and faster oscillations, having more than one cycle within the STDP window, cause the post-synaptic cell to receive inputs both before and after it has generated a spike.
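A minimal sketch of a standard exponential STDP window shows why the timing matters; the learning rates and time constants here are illustrative textbook-style values, not figures from the work cited:

```python
import math

# Illustrative STDP constants: pre-before-post within a few tens of
# milliseconds potentiates; post-before-pre depresses.
A_PLUS, A_MINUS = 0.01, 0.012      # learning rates for LTP / LTD
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # decay time constants (ms)

def stdp_weight_change(dt_ms):
    """Weight change for a spike-time difference dt_ms = t_post - t_pre
    (positive: the pre-synaptic cell fired first)."""
    if dt_ms > 0:
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)     # potentiation (LTP)
    else:
        return -A_MINUS * math.exp(dt_ms / TAU_MINUS)   # depression (LTD)

# A ~25 ms gamma cycle keeps coincident pre/post spikes inside this window;
# slower rhythms spread the spikes too far apart to drive much change.
print(stdp_weight_change(10.0) > 0)    # pre before post -> strengthen
print(stdp_weight_change(-10.0) < 0)   # post before pre -> weaken
```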

However, STDP occurs whenever pre-synaptic and post-synaptic action potentials are correlated. Notably, this happens even if two cells with equally weak inputs correlate, which is not the kind of result that is useful for learning, as we wish to learn strong coincidences. Also, gamma synchronization is not necessarily time-locked to a stimulus. For these two reasons, long term potentiation (strengthening) of synapses induced by synchronized gamma activity alone does not attain the specificity required for memory encoding; an additional mechanism is required.

The hippocampus is considered to play a major role in memory. Learning-dependent synchronization of hippocampal theta activity is associated with large event-related potentials with frequencies in the theta (4-8 Hz) and delta (0-4 Hz) ranges, which appear to result from a phase reset of theta activity occurring at a fixed interval after presentation of a stimulus. Theta reset determines the theta phase at which a given stimulus affects a cell. Theta band learning is non-Hebbian and only involves pre-synaptic, not post-synaptic, spikes. If stimuli arrive during the peak of the theta oscillation, long term potentiation (strengthening of synapses) occurs; inputs arriving at a trough of the theta cycle induce long term depression (weakening of synapses). Axmacher et al note that a combination of theta and gamma learning dynamics may provide the required specificity for memory learning:

‘Whereas gamma-dependent plasticity alone may not distinguish between correlated weak and strong inputs and occurs not necessarily time-locked to a given stimulus, plasticity during theta reset has these features. Theta-dependent plasticity alone, on the other hand, is too coarse to encode stimulus features with a high temporal resolution: at least Hebbian LTP requires precise spike timing. Moreover, sequence encoding (sequences of items as well as spatial paths) has been suggested to depend on action potentials during subsequent theta phases, with gamma periods binding each item.’
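The peak-gives-LTP, trough-gives-LTD rule described above can be sketched as a phase-dependent sign. The cosine-shaped modulation and constants are an illustrative choice of mine, not Axmacher et al’s model:

```python
import math

def theta_plasticity(stimulus_time_s, theta_hz=6.0, rate=0.01):
    """Sign and size of the synaptic change for an input arriving at a given
    time, assuming the theta cycle peaks at t = 0 (cosine phase zero)."""
    phase = 2 * math.pi * theta_hz * stimulus_time_s
    # Positive near the peak (LTP), negative near the trough (LTD).
    return rate * math.cos(phase)

period = 1.0 / 6.0                          # one theta cycle at 6 Hz
print(theta_plasticity(0.0) > 0)            # input at the peak -> LTP
print(theta_plasticity(period / 2) < 0)     # input at the trough -> LTD
```

Combined with the gamma-window STDP sketch earlier in the post, the theta phase supplies the stimulus-locked sign while gamma timing supplies the millisecond precision, which is the division of labour the quote describes.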