Biological Inspiration

1 Biological Inspiration
- Artificial Neural Network (ANN): loosely based on the biological neuron
- Each unit is simple, but many are connected in a complex network
- If enough inputs are received:
  - the neuron gets "excited"
  - it passes on a signal, or "fires"
- ANNs differ from biology:
  - an ANN unit outputs a single value
  - a biological neuron sends out a complex series of spikes
  - biological neurons are not fully understood
Image from Purves et al., Life: The Science of Biology, 4th Edition, by Sinauer Associates and WH Freeman

2 Neural Net example: ALVINN
- Autonomous vehicle controlled by an Artificial Neural Network
- Drives at up to 70 mph on public highways
Note: most images are from the online slides for Tom Mitchell's book "Machine Learning"

6 The Perceptron
[Table of six students (1 Richard, 2 Alan, 3 Alison, 4 Jeff, 5 Gail, 6 Simon) with attributes: first last year, male, works hard, lives in halls; target attribute: first this year]
Note: example from Alison Cawsey

7 The Perceptron
[Perceptron diagram: the four inputs (first last year, male, hardworking, lives in halls) are each multiplied by a weight (0.25, 0.10, 0.20 and 0.10 appear in the figure), added together, and the sum is compared to a threshold of 0.5 to give the output]
- Apply the idea in many applications
- Finished: ready to try unseen examples

8 The Perceptron
[Perceptron diagram: the same four inputs with weights 0.25, 0.20, 0.10 and 0.20, and threshold = 0.5]
- The simple perceptron works OK for this example
- But sometimes it will never find weights that fit everything
- In our example:
  - important: getting a first last year, being hardworking
  - not so important: being male, living in halls
- Suppose there was an "exclusive or"
  - important: (male) OR (lives in halls), but not both
  - a single perceptron can't capture this relationship
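The "exclusive or" limitation above can be checked directly. This is a minimal sketch (not from the original slides): a threshold unit trained with the standard perceptron learning rule on the four XOR cases. However long it trains, at least one case stays misclassified, because no single line separates the XOR examples.

```python
# A single perceptron cannot fit XOR ("one of the two, but not both").
# Train a threshold unit on the four XOR cases and count misclassifications.

def predict(w, b, x):
    """Fire (output 1) if the weighted sum of inputs reaches the threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0

def train(examples, epochs=1000, lr=0.1):
    """Perceptron learning rule: nudge the weights after each mistake."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in examples:
            err = target - predict(w, b, x)
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w, b = train(xor)
errors = sum(predict(w, b, x) != t for x, t in xor)
# errors is always >= 1: no weight setting separates the XOR cases,
# so the weights keep jumping and never settle on a correct answer
```

The same `train` function does find correct weights for linearly separable relationships such as AND or OR; XOR is the one it can never capture.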

9 The Perceptron
- If no weights fit all the examples, could we find a good approximation? (i.e. one that won't be correct 100% of the time)
- Our current training method looks at the thresholded output, 0 or 1
- Whenever it meets the examples that don't fit:
  - it makes the weights jump up and down
  - it never settles down to a best approximation
- What if we don't "threshold" the output?
  - Look at how big the error is, rather than just 0 or 1
  - The error can be added up over all the examples
  - This tells you how good the current weights are
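The idea of adding up the error over all examples can be sketched as follows. This is an illustration under assumptions (squared error on the raw, unthresholded output; the data and weights are made up):

```python
# Score a weight vector by summing the squared error over all examples,
# using the raw (unthresholded) output. Smaller totals mean better weights.

def raw_output(weights, inputs):
    """Weighted sum of the inputs, with no threshold applied."""
    return sum(w * x for w, x in zip(weights, inputs))

def total_error(weights, examples):
    """Sum of squared errors over the whole training set."""
    return sum((target - raw_output(weights, x)) ** 2
               for x, target in examples)

# Illustrative data: three examples of (inputs, target)
examples = [((1, 0, 1), 1), ((0, 1, 0), 0), ((1, 1, 1), 1)]
err = total_error([0.25, 0.20, 0.10], examples)  # how good are these weights?
```

Because this score changes smoothly as the weights change (unlike the 0-or-1 thresholded output), it gives training something to settle towards.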

10 Neural Network Training – Gradient Descent
- An alternative view of learning: search for a hypothesis, using a heuristic
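Gradient descent makes that search concrete: repeatedly move the weights a small step downhill on the summed error. The sketch below applies it to a linear unit; the learning rate and data are illustrative assumptions, not from the slides.

```python
# Gradient descent on the summed squared error of a linear unit:
# each step adjusts every weight a little in the downhill direction.

def gradient_step(weights, examples, lr=0.05):
    """One batch step: accumulate d(error)/d(w_i) over all examples."""
    grads = [0.0] * len(weights)
    for x, target in examples:
        out = sum(w * xi for w, xi in zip(weights, x))
        for i, xi in enumerate(x):
            grads[i] += -2 * (target - out) * xi  # derivative of squared error
    return [w - lr * g for w, g in zip(weights, grads)]

# Illustrative data: the weights that fit both examples are [1.0, 0.0]
examples = [((1.0, 0.0), 1.0), ((0.0, 1.0), 0.0)]
w = [0.0, 0.0]
for _ in range(200):
    w = gradient_step(w, examples)
# w approaches [1.0, 0.0], the weights that fit both examples exactly
```

The heuristic is the gradient itself: it says which direction in weight space reduces the error fastest from where the search currently stands.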

14 Issues in Multilayer Networks
- The error landscape will not be so neat
- There may be multiple local minima
- Can use "momentum"
  - takes you out of minima and across flat surfaces
- Danger of overfitting:
  - fitting noise
  - fitting the exact details of the training examples
- Can stop by monitoring a separate set of examples (a validation set)
  - tricky to know when to stop
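The "momentum" trick above can be sketched in a few lines. This is an illustration, not the slides' own code; the coefficients are assumed values. Each update blends the current gradient with the previous update, so the weights keep moving even where the gradient is zero:

```python
# Momentum: the update remembers the previous direction of travel,
# which carries the weights across flat surfaces and out of shallow minima.

def momentum_step(w, velocity, grad, lr=0.1, beta=0.9):
    """Blend the previous update (scaled by beta) with the new gradient."""
    velocity = beta * velocity - lr * grad
    return w + velocity, velocity

# On a flat stretch (grad = 0) a plain gradient step would stall,
# but momentum keeps the weight moving in the previous direction:
w, v = 0.0, 0.0
w, v = momentum_step(w, v, grad=1.0)  # downhill step: velocity becomes -0.1
w, v = momentum_step(w, v, grad=0.0)  # flat ground: still moves by 0.9 * -0.1
```

With `beta = 0`, this reduces to plain gradient descent; larger `beta` gives the search more inertia.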

16 Example: recognise the direction of a face
Note: images are from the online slides for Tom Mitchell's book "Machine Learning"

17 Neural Network Applications
- Particularly good for pattern recognition:
  - sound recognition – voice, or medical
  - character recognition (typed or handwritten)
  - image recognition (e.g. is there a tank?)
  - robot control
  - ECG patterns – has someone had a heart attack?
  - applications for credit cards or mortgages
  - recommender systems
  - other types of data mining
  - spam filtering
  - shape in Go
- Note: just like search, when we take an abstract view of problems, many seemingly different problems can be solved by one technique
- Neural networks can be applied to tasks that logic could also be applied to

18 What are Neural Networks Good For?
- When training data is noisy or inaccurate
  - e.g. camera or microphone inputs
- Very fast performance once the network is trained
- Can accept input numbers from sensors directly
  - a human doesn't need to translate the world into logic
- Disadvantages?
  - Need a lot of data – training examples
  - Training time can be very long
    - this is the big problem for large networks
  - The network is like a "black box"
    - a human can't look inside and understand what has been learnt
    - learnt logical rules would be easier to understand

19 Representation in Neural Networks
- Neural networks give us a sort of representation
  - the weights on the connections are a sort of representation
- E.g. consider an autonomous vehicle
  - we could represent the road, objects and positions in logic
  - instead, the computer learns for itself and comes up with its own weights
  - it finds its own representation, especially in the hidden layers
- We say:
  - logical/symbolic representation is "NEAT"
  - neural network representation is "SCRUFFY"
- What's best?
  - Neural could be good if you're not sure what representation to use, or how to solve the problem
  - Not easy to inspect the solution, though

20 In the days when Sussman was a novice, an old man once came to him as he sat hacking at the PDP.
"What are you doing?", asked the old man.
"I am training a randomly wired neural net to play Tic-tac-toe", Sussman replied.
"Why is the net wired randomly?", asked the old man.
"I do not want it to have any preconceptions of how to play", Sussman said.
The old man then shut his eyes.
"Why do you close your eyes?", Sussman asked the man.
"So that the room will be empty."
At that moment, Sussman was enlightened.
Marvin Minsky
Slavery of machine and implications