Neural Computing

Aims

The aims of this course are to investigate how biological nervous systems
accomplish the goals of machine intelligence while using radically
different strategies, architectures, and hardware; and to investigate how
artificial neural systems can be designed to emulate some of those
biological principles in the hope of capturing some of their performance.

Lectures

Natural versus artificial substrates of intelligence.
Comparison of the differences between biological and artificial
intelligence in terms of architectures, hardware, and strategies.
Levels of analysis; mechanism and explanation; philosophical issues.
Basic neural network architectures compared with rule-based or
symbolic approaches to learning and problem-solving.

Neurobiological wetware: architecture and function of the brain.
Human brain architecture. Sensation and perception; learning and memory.
What we can learn from the neurology of brain trauma; modular organisation
and specialisation of function. Aphasias, agnosias, apraxias.
How stochastic communication media, unreliable and randomly distributed
hardware, very slow and asynchronous clocking, and imprecise connectivity
blueprints nevertheless give us unrivalled performance in real-time tasks
involving perception, learning, and motor control.

Neural operators that encode, analyse, and represent image
structure.
How the mammalian visual system, from retina to brain, extracts information
from optical images and sequences of them to make sense of the world.
Description and modelling of neural operators in engineering terms as
filters, coders, compressors, and pattern matchers.
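One standard engineering model of such neural operators is the 2-D Gabor filter, an oriented sinusoid windowed by a Gaussian envelope, often used to describe the receptive fields of simple cells in visual cortex. The following is a minimal NumPy sketch; the function name and parameter choices are illustrative, not a reference implementation.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma, phase=0.0):
    """2-D Gabor filter: a sinusoidal carrier at orientation theta,
    windowed by an isotropic Gaussian envelope of width sigma."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate the coordinate frame so the carrier runs at angle theta.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / wavelength + phase)
    return envelope * carrier

# Convolving an image with a bank of such kernels at several
# orientations and wavelengths yields oriented band-pass responses.
kernel = gabor_kernel(size=21, wavelength=8.0, theta=0.0, sigma=4.0)
```

Varying `theta` and `wavelength` gives a family of oriented band-pass filters, which is the sense in which the cortical operators act as filters and coders of image structure.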

Cognition and evolution. Neuropsychology of face recognition.
The sorts of tasks, primarily social, that shaped the evolution of
human brains. The computational load of social cognition as the
driving factor for the evolution of large brains. How the
degrees-of-freedom within faces and between faces are extracted and
encoded by specialised areas of the brain concerned with the
detection, recognition, and interpretation of faces and facial
expressions. Efforts to simulate these faculties in artificial systems.

Artificial neural networks for pattern recognition.
A brief history of artificial neural networks and some successful
applications. Central concepts of learning from data, and foundations
in probability theory. Regression and classification problems viewed
as non-linear mappings. Analogy with polynomial curve fitting.
General "linear" models. The curse of dimensionality, and the
need for adaptive basis functions.
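The polynomial curve-fitting analogy can be made concrete with a general "linear" model: linear in its weights, non-linear in the input through fixed polynomial basis functions. This is a minimal sketch with synthetic data; the degree and noise level are illustrative.

```python
import numpy as np

# Synthetic regression data: a sinusoid plus Gaussian noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
t = np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal(x.size)

degree = 3
Phi = np.vander(x, degree + 1)               # design matrix of polynomial basis functions
w, *_ = np.linalg.lstsq(Phi, t, rcond=None)  # least-squares weights (linear in w)
y = Phi @ w                                  # fitted values at the training inputs
```

With fixed basis functions the number of terms grows rapidly with input dimension, which is one face of the curse of dimensionality and the motivation for adaptive basis functions learned by neural networks.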

Network models for classification and decision theory.
Probabilistic formulation of classification problems. Prior and
posterior probabilities. Decision theory and minimum misclassification
rate. The distinction between inference and decision. Estimation of
posterior probabilities compared with the use of discriminant functions.
Neural networks as estimators of posterior probabilities.
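The minimum-misclassification decision rule assigns an input to the class with the largest posterior probability, P(C_k | x) proportional to p(x | C_k) P(C_k). A toy one-dimensional sketch with Gaussian class-conditional densities; the priors, means, and widths here are invented for illustration.

```python
import numpy as np

def gaussian(x, mu, sigma):
    """Univariate Gaussian density p(x | C_k)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Illustrative two-class problem: prior probabilities and
# class-conditional Gaussian parameters chosen arbitrarily.
priors = np.array([0.7, 0.3])                 # P(C_0), P(C_1)
means = np.array([0.0, 2.0])
sigmas = np.array([1.0, 1.0])

def posteriors(x):
    """Posterior probabilities P(C_k | x) via Bayes' theorem."""
    joint = gaussian(x, means, sigmas) * priors   # p(x | C_k) P(C_k)
    return joint / joint.sum()                    # normalise over classes

def decide(x):
    """Pick the class with the largest posterior:
    the minimum-misclassification-rate decision."""
    return int(np.argmax(posteriors(x)))
```

Estimating the posteriors and then deciding, as here, separates inference from decision; a discriminant function would instead map inputs straight to class labels without representing the posteriors at all.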

Objectives

At the end of the course students should

be able to describe key aspects of brain function and neural processing
in terms of computation, architecture, and communication

be able to analyse the viability of distinctions such as
computing versus communicating, signal versus noise, and
algorithm versus hardware, when these dichotomies from Computer
Science are applied to the brain

understand the neurobiological mechanisms of vision well enough to think
of ways to implement them in machine vision

understand basic principles of the design and function of artificial
neural networks that learn from examples and solve problems in classification
and pattern recognition