These myths are oft-repeated claims about connectionist systems which, when closely scrutinized, fail to be adequately justified or properly qualified. In some instances, the claims are simply false.

The myths that Berkeley seeks to refute are:

Connectionist systems are in some sense ‘neural’ or ‘brain-like’ – Connectionist systems are often likened to the brain or described as having brain-like properties. Berkeley references Churchland (1989: p. 160), who introduces connectionist networks as follows:

The networks to be explored attempt to simulate natural neurons with artificial units…Each unit receives input signals from other units via “synaptic” connections…the “axonal” end branches from other units all make connections directly to the “cell body” of the receiving unit.

Addressing Rumelhart’s claim that a connectionist processing unit closely resembles an abstract neuron, Berkeley argues that there is no such thing as an abstract neuron. The brain contains many types of neurons, so it is fair to ask which type the abstraction is drawn from, since the features and functional properties of one kind of neuron may not apply to the entire class. By the same argument, the homogeneity of processing units typical of connectionist architectures has no biological counterpart. The same goes for the “bias” term and the connection weights that are part of connectionist models – there seems to be no evidence that “threshold membrane potentials in biological systems can be modified in any analogous way”.
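The homogeneous processing unit at issue can be sketched in a few lines. The function below (my own illustration; the names and numbers are not from the article) computes the standard weighted sum of inputs plus a bias, squashed through a sigmoid. The point worth noticing is that every unit in a typical connectionist network computes exactly this one function, which is the uniformity Berkeley contrasts with the diversity of real neurons:

```python
import math

def unit_output(inputs, weights, bias):
    """One idealized connectionist unit: a weighted sum of its
    inputs plus a bias term, passed through a sigmoid activation.
    In a standard network, every unit is an identical copy of this."""
    net = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-net))

# Example activation for arbitrary illustrative values.
out = unit_output([1.0, 0.0], [0.5, -0.3], 0.1)
```

The bias term here is just another trainable number added to the sum, which is precisely the element Berkeley says has no clear analogue in biological threshold membrane potentials.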

Similarly, regarding connections, an obvious difference in biological systems is that dendrites (the signal receivers) and axons (the signal transmitters) are parts of the neuron, not distinct from it as in connectionist models. Moreover, connectionist structures are massively parallel (every node is connected to every node in the prior and subsequent layers of the network), whereas Churchland notes that cortical neurons are rather sparsely connected; not everything is connected to everything else.
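The "everything connected to everything in adjacent layers" property has a simple arithmetic consequence worth making explicit (a sketch of my own, with hypothetical layer sizes): in a fully connected feedforward network, the number of connections is the product of adjacent layer sizes, summed over the layers.

```python
def count_connections(layer_sizes):
    """Connections in a fully connected feedforward net:
    every unit in one layer links to every unit in the next,
    so the count is the sum of products of adjacent layer sizes."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

# A hypothetical 100-50-10 network: 100*50 + 50*10 = 5500 connections.
n = count_connections([100, 50, 10])
```

Sparse cortical connectivity of the kind Churchland describes would correspond to most of these potential links simply being absent, which is not how standard connectionist architectures are wired.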

Connectionist Systems Are Consistent With Real Time Constraints Upon Processing – Connectionists argue that their algorithms must have considerable parallelism because the brain has slow components, but many of them: “neurons operate in the time scale of milliseconds, whereas computer components operate at the time scale of nanoseconds” (the 100-step argument). According to Berkeley, this argument ignores sub-neuronal activities (e.g. at the level of the synaptic cleft) and the “many chemical processes of the dendrites which take place over a wide range of time scales”. The argument also rests on an over-simplification – that neurons operate at the scale of milliseconds – which is untenable given the “variety of different intrinsic firing patterns and rates” of cortical neurons and the “three distinct types of nerve fiber which have differential rates of signal conductance”.

Connectionist Systems Exhibit Graceful Degradation – We are able to make sense of imperfect inputs, such as a distorted digit on a scoreboard, and connectionist systems are claimed to be able to recognize degraded patterns that non-connectionist systems cannot. This overlooks research on non-connectionist systems that can also deal with degradation.
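To see why the claim over-reaches, consider a deliberately non-connectionist pattern matcher (a toy of my own construction, with made-up bitmaps): a nearest-template classifier using Hamming distance, with no network, no weights, and no training, still recognizes a corrupted input.

```python
# Hypothetical 5x3 digit bitmaps, flattened to 15-character strings.
TEMPLATES = {
    "0": "111101101101111",
    "1": "010010010010010",
}

def classify(pattern):
    """Pick the template with the fewest mismatched bits
    (Hamming distance) -- a classical, non-connectionist method."""
    return min(TEMPLATES,
               key=lambda d: sum(a != b for a, b in zip(TEMPLATES[d], pattern)))

distorted = "111101100101111"  # one bit flipped from the "0" template
label = classify(distorted)
```

The classifier's accuracy falls off gradually as more bits are flipped, which is graceful degradation by any reasonable reading of the phrase, without any connectionist machinery.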

Connectionist Systems Are Good Generalizers – “As a rough first approximation, a system can be said to generalize when it can produce outputs which are appropriate for a particular input or class of inputs, which it has not been previously given information about”. Generalization cannot, however, be considered a fixed property of connectionist systems, because “even with identical network architectures, training regimes and similar starting parameters, different versions of the same network will exhibit different degrees of generalization”.
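That variability is easy to reproduce in miniature (an entirely hypothetical setup of my own: a single sigmoid unit, toy data, and arbitrary seeds, not an experiment from the article): two training runs with identical architecture, data, and training regime, differing only in their random starting weights, give different responses to an input neither run was trained on.

```python
import math
import random

def train_unit(seed, data, epochs=20, lr=0.5):
    """Gradient-descent training of one sigmoid unit from a
    seed-dependent random start; everything else is identical."""
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    b = rng.uniform(-1, 1)
    for _ in range(epochs):
        for x, target in data:
            net = w[0] * x[0] + w[1] * x[1] + b
            out = 1.0 / (1.0 + math.exp(-net))
            err = (out - target) * out * (1 - out)  # squared-error gradient
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(model, x):
    w, b = model
    return 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))

data = [([0, 0], 0), ([1, 1], 1)]      # toy training set
novel = [1, 0]                          # an input never seen in training
a = predict(train_unit(1, data), novel)
b = predict(train_unit(2, data), novel)
# a and b differ: same architecture and regime, different generalization.
```

The disagreement on the novel input is exactly Berkeley's point: what the network "generalizes to" depends on accidents of initialization, not on a fixed property of the architecture.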

I am not an expert here, but reading this article has made me think a bit about the proximity of connective knowledge and connectivism to neuroscience.