Solution for DP problems

Objective: maximize the total return accumulated over all stages,

    Maximize f = B1 + B2 + ... + BT,   t = 1, 2, ..., T

where Bt is the benefit (return) obtained at stage t.

Bellman's Principle of Optimality: An optimal policy (a sequence of decisions) has the property that, whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with respect to the state resulting from the first decision. Equivalently, from any state Si of the system at stage i, one must proceed optimally up to the last stage, irrespective of how one arrived at state Si.

Solution for DP problems

In serial multistage decision-making problems, the stage numbers may be assigned in increasing order in either the backward or the forward direction. In both methods, the state variable, the state transformation equation, and the recursive relation must be defined very precisely at every stage.

[Figure: backward and forward recursion over a three-stage system (User 1, User 2, User 3), showing how Stages 1-3 are numbered in each direction; the diagram itself is not recoverable from the extracted text.]

Example (forward recursion)

A pipeline is to be laid between nodes G and C, shown in the figure below. The pipeline can pass only along the routes shown by solid lines between intermediate nodes. The distance between two nodes is marked on the line joining them. Obtain the shortest distance for the pipeline using dynamic programming.

[Figure: network of nodes A-I with edge distances; the layout and distances are not recoverable from the extracted text.]

Solution: The shortest route is G-D-A-B-C, with total distance = 42.

Example (backward recursion)

Inflows during four seasons to a reservoir with a storage capacity of 4 units are, respectively, 2, 1, 3, and 2 units. Only discrete values (0, 1, 2, ...) are considered for storage and release. Overflows from the reservoir are also included in the release. Reservoir storage at the beginning of the year is 0 units. A release from the reservoir during a season yields the following benefit, which is the same for all four seasons.

Release   Benefit
0         -100
1          250
2          320
3          480
4          520
5          520
6          410
7          120

Solution: R1 = 1, R2 = 1, R3 = 3, R4 = 3; total benefit = 1460.
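The forward-recursion idea from the pipeline example can be sketched in code. Since the edge distances in the original figure are not recoverable, the staged network below is a small hypothetical stand-in (the node names and distances are assumptions, not the figure's data): the cumulative shortest distance f[v] is built stage by stage from the source.

```python
# Forward recursion on a staged network: f[v] = shortest distance from the
# source to node v, computed stage by stage.
# NOTE: this graph is hypothetical; the distances in the original figure
# were lost in extraction.
STAGES = [["G"], ["D", "E"], ["A", "B"], ["C"]]   # assumed layering
DIST = {("G", "D"): 5, ("G", "E"): 9,
        ("D", "A"): 6, ("D", "B"): 4,
        ("E", "A"): 3, ("E", "B"): 7,
        ("A", "C"): 3, ("B", "C"): 8}

def shortest_path():
    f = {"G": 0}      # cumulative shortest distance to each node
    pred = {}         # predecessor on the best route, for traceback
    for i in range(1, len(STAGES)):
        for v in STAGES[i]:
            # best way to reach v is via some node u of the previous stage
            f[v], pred[v] = min((f[u] + DIST[(u, v)], u)
                                for u in STAGES[i - 1] if (u, v) in DIST)
    # trace the optimal route back from the sink
    route, v = ["C"], "C"
    while v in pred:
        v = pred[v]
        route.append(v)
    return f["C"], route[::-1]

print(shortest_path())   # (14, ['G', 'D', 'A', 'C']) for this assumed graph
```

The same recursion, applied to the actual figure's distances, yields the route G-D-A-B-C with distance 42 stated in the solution.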
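The reservoir example is fully specified, so the backward recursion can be sketched directly: f[t][s] is the best benefit obtainable from season t to the end when the storage at the start of season t is s, and releases are bounded below by the overflow requirement (storage cannot exceed capacity). A minimal sketch, using the data from the example:

```python
# Backward-recursion DP for the four-season reservoir operation example.
CAPACITY = 4
INFLOW = [2, 1, 3, 2]                         # seasonal inflows, in units
BENEFIT = {0: -100, 1: 250, 2: 320, 3: 480,   # benefit per release amount,
           4: 520, 5: 520, 6: 410, 7: 120}    # same table for all seasons

def optimal_releases(initial_storage=0):
    T = len(INFLOW)
    # f[t][s]: best total benefit from season t onward, starting storage s
    f = [[0] * (CAPACITY + 1) for _ in range(T + 1)]
    policy = [[0] * (CAPACITY + 1) for _ in range(T)]
    for t in range(T - 1, -1, -1):            # backward over the seasons
        for s in range(CAPACITY + 1):
            water = s + INFLOW[t]
            best_val, best_r = float("-inf"), 0
            # release includes any overflow, so r >= water - CAPACITY
            for r in range(max(0, water - CAPACITY),
                           min(water, max(BENEFIT)) + 1):
                val = BENEFIT[r] + f[t + 1][water - r]
                if val > best_val:
                    best_val, best_r = val, r
            f[t][s], policy[t][s] = best_val, best_r
    # forward pass to recover the optimal release sequence
    s, releases = initial_storage, []
    for t in range(T):
        r = policy[t][s]
        releases.append(r)
        s = s + INFLOW[t] - r
    return f[0][initial_storage], releases

print(optimal_releases())   # (1460, [1, 1, 3, 3])
```

Running it reproduces the stated optimum: releases R1 = 1, R2 = 1, R3 = 3, R4 = 3 with total benefit 1460 (250 + 250 + 480 + 480).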