Last year it was announced that quantum vibrations had been found in microtubules. Microtubules are hollow structures in the cytoplasm of neurons, the cell substance between the cell membrane and the nucleus. This is extraordinary, as such quantum effects are thought to require very cold temperatures, and biological systems have been considered far too warm for such things to occur. Further to this, the finding lends some support to a controversial quantum theory of consciousness by Sir Roger Penrose and Stuart Hameroff that is some 20 years old. You can read about it here.

Neurons are the basic functional units of the brain. The conventional view is that they transmit information using electrical signals called action potentials. A neuron has a membrane that serves as a barrier separating the inside of the cell from the outside, and its membrane voltage is dictated by the difference in electrical potential across that barrier. Membrane ion pumps and channels move ions, which carry different electrical charges, across the membrane, so neurons are constantly exchanging ions with their extracellular surroundings. In doing so they can not only maintain a resting potential, but also propagate action potentials by depolarising the membrane beyond a critical threshold. Action potentials are transmitted between neurons, allowing them to communicate.
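This threshold-and-fire behaviour is exactly the sort of thing that can be captured in a few lines of code. The sketch below is a leaky integrate-and-fire neuron, a standard textbook simplification rather than a full biophysical model, and all the constants in it are illustrative choices, not measured values:

```python
# A minimal leaky integrate-and-fire neuron: a standard textbook
# simplification, not a full biophysical (Hodgkin-Huxley) model.
# All constants below are illustrative, not measured values.

def simulate_lif(input_current, v_rest=-70.0, v_thresh=-55.0,
                 v_reset=-75.0, tau=10.0, resistance=10.0, dt=1.0):
    """Integrate membrane voltage over time; emit a spike (action
    potential) whenever the voltage crosses the threshold."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # the leak pulls v back toward rest; input current pushes it up
        dv = (-(v - v_rest) + resistance * i_in) / tau
        v += dv * dt
        if v >= v_thresh:          # depolarised past the critical threshold
            spikes.append(step)    # fire an action potential...
            v = v_reset            # ...and reset (a crude refractory period)
    return spikes

# A steady drive makes the neuron fire periodically; no drive leaves
# it silent at its resting potential.
print(simulate_lif([2.0] * 100))   # a list of spike times
print(simulate_lif([0.0] * 100))   # an empty list
```

The point is not biological fidelity but that the conventional account of a neuron reduces cleanly to an algorithm, which is what the argument below turns on.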

This functionality can be encoded in an algorithm, which means that the conventional biological model of the brain can be simulated on a computer. In his books Roger Penrose critiques Artificial Intelligence research by claiming that human understanding is essentially non-algorithmic and therefore non-computational. The argument draws on the Church-Turing Thesis and Gödel's Incompleteness Theorems, which some (Penrose among them, though not all) consider closely related results.

Penrose’s argument goes something like this: suppose there is an algorithm for deciding whether a mathematical proposition is true. This algorithm must be consistent, otherwise its decisions about propositions cannot be trusted. However, according to the Gödel and Church-Turing results, such an algorithm, if it is consistent, cannot be applied to itself to establish its own consistency. The implication for AI is that either we cannot know whether something is really true, or the method used to ascertain truth cannot itself be known or validated as correct. Penrose believes that our ability to know mathematical, and indeed all, truths is unassailable, because such truths, particularly mathematical ones, are ideal. In turn he suggests that we know our understanding is correct, and therefore we know something that cannot be known algorithmically.
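The self-application step at the heart of this argument is the same diagonalisation trick behind Turing's halting theorem, and it can be sketched concretely. The names below (`make_contrarian`, `always_loops`) are made up for illustration; the candidate "decider" can be any function you supply:

```python
# Diagonalisation sketch behind the halting problem: given ANY
# candidate "does this function halt?" decider, we can construct a
# function that does the opposite of whatever the decider predicts,
# so no candidate can be both total and correct.

def make_contrarian(candidate_halts):
    """Return a function g that defeats candidate_halts:
    if the candidate predicts g halts, g loops forever;
    if it predicts g loops, g halts immediately."""
    def g():
        if candidate_halts(g):
            while True:        # predicted to halt -> loop forever
                pass
        return "halted"        # predicted to loop -> halt
    return g

# A candidate that claims every function loops is refuted outright:
always_loops = lambda f: False
g = make_contrarian(always_loops)
assert g() == "halted"         # the candidate's prediction was wrong
```

(We only run the branch where the prediction is "loops"; running the other branch would, by construction, never return.) This is the formal kernel that Penrose then leverages into a claim about human understanding.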

This leaves us in a number of positions. Either (A) our understanding is algorithmic but we can never understand how it works, (B) our method of understanding is algorithmic but not consistent, or (C) our understanding is non-algorithmic and therefore requires more than our current conventional biological understanding of the brain.

Regarding (A), I believe it is possible to develop complex computational systems whose workings, due to their innate complexity, cannot be fully known, yet which we can still use to solve problems. Liquid-State Machines are a good example of such a methodology currently being employed. Hence, I don’t think it is necessary to fully understand our method of deriving understanding in order to create AI.

Regarding (B), I think Penrose’s attachment to the ideality of mathematical truths, i.e. their timeless and absolute truth, makes him feel that the ability to grasp them is somehow special and unassailable. I would regard this as a fallacy. A large part of what brains do is statistical pattern recognition. Our ability to understand fuzzy concepts such as a ‘chair’ may rely on a similar mechanism to the one we use to understand non-fuzzy things like mathematical truths. The reason the latter is so much more precise is not that we apply different cognitive systems to it, but that the subject matter itself is so much more precise. Hence, I doubt that our understanding is consistent in a Church-Turing/Gödel sense; it is just that we do a damn good job when the subject matter is amenable.
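Liquid-State Machines belong to the reservoir-computing family, in which a fixed random recurrent network is driven by an input and only a simple linear readout is trained, so the reservoir's internal dynamics never need to be understood. The toy below is the rate-based cousin of an LSM (an echo-state network) rather than a true spiking liquid, and every size and constant in it is an arbitrary choice, but it shows the black-box principle:

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed random reservoir: we never design or inspect these weights.
n_res, n_steps = 100, 500
w_in = rng.uniform(-0.5, 0.5, n_res)
w_res = rng.uniform(-0.5, 0.5, (n_res, n_res))
w_res *= 0.9 / max(abs(np.linalg.eigvals(w_res)))   # spectral radius < 1

# Drive the reservoir with a random signal and record its states.
u = rng.uniform(-1, 1, n_steps)
states = np.zeros((n_steps, n_res))
x = np.zeros(n_res)
for t in range(n_steps):
    x = np.tanh(w_res @ x + w_in * u[t])
    states[t] = x

# Train ONLY the linear readout (ridge regression) to recall the input
# one step back: the reservoir stays an unexamined black box, yet the
# overall system solves the task.
target = np.roll(u, 1)
w_out = np.linalg.solve(states.T @ states + 1e-6 * np.eye(n_res),
                        states.T @ target)
pred = states @ w_out
mse = np.mean((pred[10:] - target[10:]) ** 2)   # skip warm-up steps
print(f"readout MSE: {mse:.5f}")                # small -> task solved
```

We get a working memory device without ever knowing what any individual reservoir unit is doing, which is precisely the sense in which position (A) seems tenable to me.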

Whilst I think my arguments for (A) and (B) may discount Penrose’s cognitive requirement for (C), I don’t think his position should be discarded just yet. Penrose argues that quantum mechanisms are non-algorithmic and super-computational, and therefore may, if tapped into, provide a mechanism for understanding. Although I don’t feel this is necessary, I would agree with Penrose’s critique of strong AI, the view that consciousness emerges from algorithmic complexity alone. Algorithms can be implemented in many mediums, even using cogs and pulleys. It does seem ridiculous that a system of cogs and pulleys, if complex enough, would become conscious. Therefore, one may conclude that algorithmic complexity alone is not enough. I would suggest that such complexity, if instantiated in a particular medium (e.g. biological brains), gives rise to consciousness. However, our current understanding of biology and classical physics does not encompass anything that can explain the phenomenon of consciousness. Perhaps an interaction between complex biological systems and quantum mechanics, with all its strange phenomena such as entanglement, may open the door to our understanding of consciousness.