Because of this associativity, and the properties of distributed representations, similar representations cluster.

Example: Hypercolumns & Retinotopy

Pattern Completion

Networks can complete known patterns on the basis of partial information. If several units from a particular well-known pattern are activated, but a few are not, the activation reverberates through the network, causing the missing information to be completed in the manner most consistent with the stimulus information given.

This property also allows them to compensate for noisy input.
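Pattern completion of this kind can be sketched with a tiny Hopfield-style network (all values here are illustrative, not from the lecture): a pattern is stored in the weights, a corrupted cue is presented, and repeated updates settle the units into the stored pattern.

```python
import numpy as np

# Hypothetical 8-unit pattern stored in a Hopfield-style network.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])

# Store the pattern: weights are the outer product, no self-connections.
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0)

# Present a partial/noisy cue: the first two units are wrong.
cue = pattern.copy()
cue[0] = -cue[0]
cue[1] = -cue[1]

# Let activation reverberate: each unit repeatedly moves to the state
# most consistent with the input it receives from the other units.
state = cue.copy()
for _ in range(5):
    state = np.sign(W @ state)

print(np.array_equal(state, pattern))  # → True: the full pattern is restored
```

The same settling process also cleans up noisy input, which is why these two properties appear together.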

Graceful Degradation

–If you destroy a piece of the CPU of a computer, it will crash. Or, if you delete a few lines of code from a program, it will also crash.

–Brains are not like this.

–Neurons die all the time, yet this doesn't radically disrupt functioning.

–Lesions cause focal deficits

Because any given neuron or piece of cortex is only one of many players in a representation of a cognitive function, damage has limited effects. Deficits increase with increased damage, but it's *not catastrophic*.

Learning

1949 Donald Hebb

Networks can learn by altering the synaptic strength, or the weights of connections between units.

However, prior to the 80s, connection weights had to be hand-set by trial-and-error to get the network to perform a task.

The trick is to devise a rule that specifies how to adjust the weights as a function of past performance so that improvement takes place.

Supervised Learning

A “teacher” provides feedback to a network on its performance.

The teacher may be a set of nodes with the correct output for a given problem. A network then tries to reach that output given a set of inputs.

An error signal is computed: the difference between the correct output (teacher) and the arrived-at solution (learner).

Aspects of the motor system do this: comparing actual vs. intended output.
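A minimal sketch of supervised learning: a single threshold unit trained on AND with a perceptron-style error-correction rule. The targets play the role of the teacher, and the weight updates follow the error signal (all names and values are illustrative):

```python
# Hypothetical single-unit network learning the AND function.
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]          # the "teacher": correct outputs
w = [0.0, 0.0]
bias = 0.0
rate = 0.1

for _ in range(50):             # repeat until the error is minimal
    for (x1, x2), t in zip(inputs, targets):
        y = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        error = t - y           # error signal: teacher minus learner
        w[0] += rate * error * x1
        w[1] += rate * error * x2
        bias += rate * error

print([1 if w[0] * a + w[1] * b + bias > 0 else 0 for a, b in inputs])
# → [0, 0, 0, 1]: the network reaches the teacher's output
```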

Backpropagation

An algorithm which assigns “blame” to nodes for the amount of error.

Determines which connection weights contributed most to the error and adjusts them.

The process is iterated until no or minimal error remains.
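The blame-assignment idea can be sketched with a small two-layer sigmoid network trained on XOR (an illustrative setup, not the lecture's own code): the output error is propagated backward to find each weight's contribution, and the weights are adjusted accordingly.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)   # teacher outputs (XOR)

W1 = rng.normal(0, 1, (2, 4))   # input -> hidden weights
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def total_error():
    return float(np.sum((T - sigmoid(sigmoid(X @ W1) @ W2)) ** 2))

before = total_error()
for _ in range(2000):
    H = sigmoid(X @ W1)                 # forward pass
    Y = sigmoid(H @ W2)
    dY = (Y - T) * Y * (1 - Y)          # "blame" at the output nodes
    dH = (dY @ W2.T) * H * (1 - H)      # blame propagated back to hidden nodes
    W2 -= 0.5 * H.T @ dY                # adjust the weights that contributed
    W1 -= 0.5 * X.T @ dH
after = total_error()

print(after < before)   # error shrinks as the process iterates
```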

Reinforcement Learning

The network (learner) is not told the correct output, only whether the arrived-at solution is good or bad.

Example: the dopaminergic reward system.
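A minimal sketch of the reinforcement setting (illustrative names and values): the learner is never shown the correct action, only a good/bad reward signal, and its estimate of each action's value is nudged by the prediction error, loosely analogous to a dopaminergic reward signal.

```python
import random

random.seed(1)
values = [0.0, 0.0]        # learned value of actions 0 and 1
rate = 0.1

for _ in range(200):
    # mostly pick the best-valued action, sometimes explore at random
    if random.random() < 0.2:
        action = random.randrange(2)
    else:
        action = max((0, 1), key=lambda a: values[a])
    reward = 1.0 if action == 1 else 0.0    # environment: action 1 is "good"
    # update toward the reward (prediction-error learning)
    values[action] += rate * (reward - values[action])

print(values[1] > values[0])   # → True: the rewarded action is valued more
```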

Unsupervised Learning

Self-Organization

No teacher or reinforcement.

The local, causal dynamics of the network shape its behavior.

Hebbian Learning
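Hebb's rule strengthens the connection between two units in proportion to their co-activity, with no teacher or reinforcement signal at all. A minimal sketch (the patterns and rate here are illustrative):

```python
# Three hypothetical activity patterns over three units:
# units 0 and 1 tend to be active together.
patterns = [
    [1, 1, 0],
    [1, 1, 0],
    [0, 0, 1],
]
rate = 0.1
# w[i][j]: connection weight from unit i to unit j
w = [[0.0] * 3 for _ in range(3)]

for x in patterns:
    for i in range(3):
        for j in range(3):
            if i != j:
                w[i][j] += rate * x[i] * x[j]   # delta_w = rate * pre * post

print(w[0][1] > w[0][2])   # → True: co-active units 0-1 wire more strongly
```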

The Power of Learning in NNs

In production systems and engineering, a problem is solved in advance, then implemented.

Genetic Algorithms

A genetic algorithm operates on a population of artificial chromosomes by selectively reproducing the chromosomes of individuals with higher performance and applying random changes.

This is applied for many generations until the fitness function stops increasing, or a satisfactory individual is found.
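The loop described above can be sketched as follows (an illustrative toy, not any of the experiments below): chromosomes are bit strings, fitness simply counts the 1s, the fitter half of the population reproduces, and each child receives a random mutation.

```python
import random

random.seed(0)
LENGTH, POP, GENERATIONS = 20, 30, 60

def fitness(chrom):
    return sum(chrom)          # higher is better

population = [[random.randint(0, 1) for _ in range(LENGTH)]
              for _ in range(POP)]

for _ in range(GENERATIONS):
    # selective reproduction: the fitter half supplies all parents
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]
    children = []
    for _ in range(POP):
        child = list(random.choice(parents))
        i = random.randrange(LENGTH)       # random change (mutation)
        child[i] = 1 - child[i]
        children.append(child)
    population = children

best = max(population, key=fitness)
print(fitness(best))   # far above the random-start average of ~10
```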

Artificial Chromosome

An artificial chromosome is a string that encodes the characteristics of an individual.

The string may encode the value of a variable of a function that must be optimized, the connection weights of a neural network, or a network architecture with learning rules for network development, etc.

Most of the subsequent experiments encode synaptic weights, so that, in essence, multiple networks are explored.

How and what to encode in the chromosome is the subject of intense research.

Fitness Function

A performance criterion that evaluates the performance of each individual phenotype. Higher is better.

In terms of the fitness function, there is no advantage for robots that move forward or backward. All robots move in the direction of the side with more sensors (the front), thus maximizing information to deal with upcoming walls.

When the selection criterion changes (either a change in environment or fitness function), some individuals that previously were not among the best may be selected for reproduction and pull the population toward a new area of genetic space.

Thus, evolving systems are continuously adaptive

Adaptation as displacement of a partially converged population in genetic space (Harvey, 1992, 1993).

Reactive Intelligence

Sensors and motors are directly linked

Agents react to the same sensory state with the same motor action.
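A reactive controller can be sketched as a fixed mapping from sensors to motors, Braitenberg-style (the crossed-inhibition wiring below is an illustrative assumption): with no internal state, identical sensory readings always produce identical actions.

```python
def reactive_controller(left_sensor, right_sensor):
    """Direct sensor-to-motor links: steer away from the stronger reading."""
    left_motor = 1.0 - right_sensor    # obstacle on the right slows... 
    right_motor = 1.0 - left_sensor    # ...nothing: it speeds the turn away
    return left_motor, right_motor

# Obstacle sensed on the left: the left wheel runs faster, turning right.
lm, rm = reactive_controller(0.8, 0.1)
print(lm > rm)   # → True

# The same sensory state always yields the same motor action.
print(reactive_controller(0.8, 0.1) == reactive_controller(0.8, 0.1))  # → True
```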

Active Perception

What about cases where a robot must react differently to similar-looking sensory patterns? (The perceptual aliasing problem.)

Overcoming Perceptual Aliasing

Agents partially determine the sensory patterns they receive from the environment by executing actions that modify the position of the agent with respect to the external environment, or by altering the environment itself.