SYS635 Adaptive Control Systems, Ka C. Cheok
MRAS Gradient Method: 0th, 1st, 2nd Order Systems, 22 Oct '05

MODEL REFERENCE ADAPTIVE SYSTEM (MRAS)

MRAS Objective: Based on the information y_m, y, u and u_c, devise a controller that automatically adjusts itself so that the behavior of the closed-loop plant output y closely follows that y_m of the reference model. In other words, make y mimic y_m.

[Figure: MRAS block diagram. The command u_c drives both the Reference Model (in a computer, or a physical system), which produces y_m, and the Controller, which produces the plant input u. The Physical Plant produces the output y, and the Adjustment Mechanism uses the difference between y and y_m to tune the Controller.]

Illustration: Can you think of an example of such a system? A dance choreographer? A coach?

5.2 The MIT Rule (read Chap. 5.2)

Example of a cost function:

J(\theta) = \frac{1}{2}\,\varepsilon^2(\theta), \quad \text{where } \varepsilon \text{ is a function of } \theta

The MIT Rule says that the time rate of change of \theta is proportional to the negative gradient of J w.r.t. \theta. That is,

\frac{d\theta}{dt} = -\gamma\,\frac{\partial J}{\partial \theta} = -\gamma\,\varepsilon\,\frac{\partial \varepsilon}{\partial \theta}

Note that \theta may be a vector although it is drawn here as a scalar.

[Figure: the cost J(\theta) plotted versus \theta, with \partial J/\partial \theta < 0 on one side of the minimum and \partial J/\partial \theta > 0 on the other. Analogy: a wader at the water surface looking for the deepest spot in the river. Should he or she go forward or backward? What logic does he or she use?]

Integrating yields the continuous-time equation for updating \theta:

\theta(t) = \theta(t_0) + \int_{t_0}^{t} \frac{d\theta}{dt}\,dt = \theta(t_0) - \int_{t_0}^{t} \gamma\,\frac{dJ}{d\theta}\,dt

Since \frac{d\theta}{dt} \approx \frac{\Delta\theta}{\Delta t}, this leads to the Delta Rule, which says that the increment (delta) in \theta is approximately

\Delta\theta \approx -\gamma\,\frac{dJ}{d\theta}\,\Delta t

and the discrete-time equation for updating \theta is

\theta_{new} = \theta_{old} + \Delta\theta
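As a concrete illustration, below is a minimal simulation sketch of the MIT rule in Python. It assumes the classic feedforward-gain example, which is not spelled out in these notes: a first-order plant dy/dt = -a*y + k*theta*u_c with unknown gain k, a reference model dy_m/dt = -a*y_m + k0*u_c, and error eps = y - y_m. Since the sensitivity d(eps)/d(theta) is proportional to y_m, the MIT rule reduces to d(theta)/dt = -gamma*eps*y_m, integrated here with Euler steps exactly as the Delta Rule above suggests. All numerical values are illustrative choices.

# Minimal MIT-rule sketch (hypothetical gain-adaptation example, not from
# these notes). Plant:  dy/dt  = -a*y  + k*theta*uc  (plant gain k unknown)
#                Model:  dym/dt = -a*ym + k0*uc       (desired behavior)
#                Error:  eps = y - ym,  J = 0.5*eps**2
# Since d(eps)/d(theta) is proportional to ym, the MIT rule
# d(theta)/dt = -gamma*eps*d(eps)/d(theta) becomes -gamma*eps*ym,
# with the unknown constant k/k0 absorbed into gamma.

a, k, k0 = 1.0, 2.0, 1.0          # plant pole, unknown plant gain, model gain
gamma, dt, T = 0.5, 0.001, 40.0   # adaptation gain, Euler step, sim horizon

y, ym, theta = 0.0, 0.0, 0.0
for step in range(int(T / dt)):
    t = step * dt
    uc = 1.0 if (t % 10.0) < 5.0 else -1.0    # square-wave command
    eps = y - ym                               # model-following error
    y     += dt * (-a * y + k * theta * uc)    # plant (Euler step)
    ym    += dt * (-a * ym + k0 * uc)          # reference model
    theta += dt * (-gamma * eps * ym)          # MIT-rule / Delta-Rule update

print(f"theta = {theta:.3f}, ideal value k0/k = {k0 / k:.3f}")

With these values theta drifts toward k0/k = 0.5, at which point the closed loop reproduces the model and the error stays near zero.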

Sigmoid function:

y = \frac{1}{1+e^{-h}}

Derivative:

\frac{\partial y}{\partial h} = \frac{\partial}{\partial h}\left(1+e^{-h}\right)^{-1} = -\left(1+e^{-h}\right)^{-2}\frac{\partial \left(1+e^{-h}\right)}{\partial h} = \frac{e^{-h}}{\left(1+e^{-h}\right)^2} = \frac{1}{1+e^{-h}}\left(1-\frac{1}{1+e^{-h}}\right) = y(1-y)

Application of the MIT Rule to a single neuron model

Let's suppose we are shown an input-output pattern or phenomenon in the form of a y_m versus x curve. In the following example, it so happens that the pattern behaves as shown in the figure.

[Figure: the given pattern y_m plotted versus x.]

We introduce a neuron model whose mathematical description tends to produce an I/O relationship similar to the given pattern. That single-neuron model is described by a sigmoidal function

y = \frac{1}{1+e^{-h}}, \qquad h = wx + b

where w is called a weight and b is a bias. The error between the pattern and the neuron model is given by

\varepsilon = y_m - y = y_m - \frac{1}{1+e^{-h}} = y_m - \frac{1}{1+e^{-(wx+b)}}

We would like to find w and b so that y follows y_m. A way to do this is to find w and b so that they minimize the error cost function

J(w,b) = \frac{1}{2}\,\varepsilon^2

The gradients of J w.r.t. w and b are:

\frac{\partial J}{\partial w} = \varepsilon\,\frac{\partial \varepsilon}{\partial w} = \varepsilon\,\frac{\partial (y_m - y)}{\partial w} = -\varepsilon\,\frac{\partial y}{\partial h}\,\frac{\partial h}{\partial w} = -\varepsilon\,y(1-y)\,\frac{\partial (wx+b)}{\partial w} = -\varepsilon\,y(1-y)\,x

\frac{\partial J}{\partial b} = \varepsilon\,\frac{\partial \varepsilon}{\partial b} = \varepsilon\,\frac{\partial (y_m - y)}{\partial b} = -\varepsilon\,\frac{\partial y}{\partial h}\,\frac{\partial h}{\partial b} = -\varepsilon\,y(1-y)\,\frac{\partial (wx+b)}{\partial b} = -\varepsilon\,y(1-y)
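The Delta Rule turns these gradients directly into discrete updates: w_new = w_old - gamma*(dJ/dw) and b_new = b_old - gamma*(dJ/db). Below is a minimal Python sketch of that training loop. The training pattern (x, y_m) is hypothetical, generated here from a "true" sigmoid so that convergence can be checked; gamma, the sample grid, and the epoch count are illustrative choices, not values from the notes.

import math

# Delta-Rule training of the single neuron y = sigmoid(w*x + b), using the
# gradients derived above: dJ/dw = -eps*y*(1-y)*x and dJ/db = -eps*y*(1-y).

def sigmoid(h):
    return 1.0 / (1.0 + math.exp(-h))

w_true, b_true = 2.0, -1.0                   # "true" pattern to be mimicked
data = [(x, sigmoid(w_true * x + b_true))    # hypothetical (x, ym) samples
        for x in [i * 0.25 - 2.0 for i in range(17)]]

w, b, gamma = 0.0, 0.0, 0.5                  # initial guesses, adaptation gain
for epoch in range(5000):
    for x, ym in data:
        y = sigmoid(w * x + b)               # neuron output
        eps = ym - y                         # error against the pattern
        grad = eps * y * (1.0 - y)           # shared factor in both gradients
        w += gamma * grad * x                # w_new = w_old - gamma*dJ/dw
        b += gamma * grad                    # b_new = b_old - gamma*dJ/db

print(f"w = {w:.3f} (true 2.0), b = {b:.3f} (true -1.0)")

After repeated sweeps through the data, w and b settle near the values that generated the pattern, so the neuron output y follows y_m.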
