
Dynamic Programming and Optimal Control, 3rd Edition, Volume II
by Dimitri P. Bertsekas
Massachusetts Institute of Technology

Chapter 6: Approximate Dynamic Programming

This is an updated version of the research-oriented Chapter 6 on Approximate Dynamic Programming. It will be periodically updated as new research becomes available, and will replace the current Chapter 6 in the book's next printing. In addition to editorial revisions, rearrangements, and new exercises, the chapter includes an account of new research, which is collected mostly in Sections 6.3 and 6.8. Furthermore, a lot of new material has been added, such as an account of post-decision state simplifications (Section 6.1), regression-based TD methods (Section 6.3), feature scaling (Section 6.3), policy oscillations (Section 6.3), λ-policy iteration and exploration-enhanced TD methods, aggregation methods (Section 6.4), new Q-learning algorithms (Section 6.5), and Monte Carlo linear algebra (Section 6.8). This chapter represents work in progress. It more than likely contains errors (hopefully not serious ones). Furthermore, its references to the literature are incomplete. Your comments and suggestions to the author are welcome. The date of last revision is given below.

November 11, 2011

In this chapter we consider approximation methods for challenging, computationally intensive DP problems. We discussed a number of such methods in Chapter 6 of Vol. I and Chapter 1 of the present volume, such as, for example, rollout and other one-step lookahead approaches. Here our focus will be on algorithms that are mostly patterned after two principal methods of infinite horizon DP: policy and value iteration. These algorithms form the core of a methodology known by various names, such as approximate dynamic programming, or neuro-dynamic programming, or reinforcement learning.

A principal aim of the methods of this chapter is to address problems with a very large number of states n. In such problems, ordinary linear algebra operations such as n-dimensional inner products are prohibitively time-consuming, and indeed it may be impossible to even store an n-vector in a computer memory. Our methods will involve linear algebra operations of dimension much smaller than n, and require only that the components of n-vectors are just generated when needed rather than stored.

Another aim of the methods of this chapter is to address model-free situations, i.e., problems where a mathematical model is unavailable or hard to construct. Instead, the system and cost structure may be simulated (think, for example, of a queueing network with complicated but well-defined service disciplines at the queues). The assumption here is that there is a computer program that simulates, for a given control u, the probabilistic transitions from any given state i to a successor state j according to the transition probabilities p_ij(u), and also generates a corresponding transition cost g(i, u, j). Given a simulator, it may be possible to use repeated simulation to calculate (at least approximately) the transition probabilities of the system and the expected stage costs by averaging, and then to apply the methods discussed in earlier chapters.
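As a concrete sketch of this averaging idea (not from the book): the Python fragment below wraps a hypothetical two-state simulator, whose "true" transition probabilities are hidden inside the `simulate` function, and recovers estimates of p_ij(u) and the expected stage cost for a given pair (i, u) by simple frequency counts. All numbers are invented for the example.

```python
import random

# Hypothetical two-state, one-control simulator; the "true" transition
# probabilities below are hidden from the estimator, which sees only samples.
P_TRUE = {0: [0.7, 0.3], 1: [0.4, 0.6]}

def simulate(i, u):
    """Return a successor state j and the cost g(i, u, j)."""
    j = 0 if random.random() < P_TRUE[i][0] else 1
    return j, 1.0 + i + j          # an invented deterministic cost g(i, u, j)

def estimate_model(i, u, n_samples=20000, seed=0):
    """Estimate p_ij(u) and the expected stage cost by averaging."""
    random.seed(seed)
    counts, total_cost = [0, 0], 0.0
    for _ in range(n_samples):
        j, g = simulate(i, u)
        counts[j] += 1
        total_cost += g
    return [c / n_samples for c in counts], total_cost / n_samples

p_hat, g_hat = estimate_model(0, 0)   # p_hat approximates [0.7, 0.3]
```

With enough samples the estimates concentrate around the true values; for a large state space, of course, this tabulation is exactly what the methods of this chapter try to avoid.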
The methods of this chapter, however, are geared towards an alternative possibility, which is much more attractive when one is faced with a large and complex system, and one contemplates approximations. Rather than estimate explicitly the transition probabilities and costs, we will aim to approximate the cost function of a given policy or even the optimal cost-to-go function by generating one or more simulated system trajectories and associated costs, and by using some form of least squares fit. Implicit in the rationale of methods based on cost function approximation is of course the hypothesis that a more accurate cost-to-go approximation will yield a better one-step or multistep lookahead policy. This is a reasonable but by no means self-evident conjecture, and may in fact not even be true in a given problem. In another type of method, which we will discuss somewhat briefly, we use simulation in conjunction with a gradient or other method to approximate directly an optimal policy with a policy of a given parametric form. This type of method does not aim at good cost function approximation through which a well-performing policy

may be obtained. Rather it aims directly at finding a policy with good performance. Let us also mention two other approximate DP methods, which we have discussed at various points in other parts of the book, but we will not consider further: rollout algorithms (Sections 6.4, 6.5 of Vol. I, and Section of Vol. II), and approximate linear programming (Section 1.3.4).

Our main focus will be on two types of methods: policy evaluation algorithms, which deal with approximation of the cost of a single policy (and can also be embedded within a policy iteration scheme), and Q-learning algorithms, which deal with approximation of the optimal cost. Let us summarize each type of method, focusing for concreteness on the finite-state discounted case.

Policy Evaluation Algorithms

With this class of methods, we aim to approximate the cost function J_µ(i) of a policy µ with a parametric architecture of the form J(i, r), where r is a parameter vector (cf. Section of Vol. I). This approximation may be carried out repeatedly, for a sequence of policies, in the context of a policy iteration scheme. Alternatively, it may be used to construct an approximate cost-to-go function of a single suboptimal/heuristic policy, which can be used in an on-line rollout scheme, with one-step or multistep lookahead. We focus primarily on two types of methods.

In the first class of methods, called direct, we use simulation to collect samples of costs for various initial states, and fit the architecture J to the samples through some least squares problem. This problem may be solved by several possible algorithms, including linear least squares methods based on simple matrix inversion. Gradient methods have also been used extensively, and will be described in Section 6.2.

The second and currently more popular class of methods is called indirect. Here, we obtain r by solving an approximate version of Bellman's equation.
We will focus exclusively on the case of a linear architecture, where J is of the form Φr, and Φ is a matrix whose columns can be viewed as basis functions (cf. Section of Vol. I). In an important method of this type, we obtain the parameter vector r* by solving the equation

Φr = ΠT(Φr),   (6.1)

where Π denotes projection with respect to a suitable norm on the subspace of vectors of the form Φr, and T is either the mapping T_µ or a related mapping, which also has J_µ as its unique fixed point [here ΠT(Φr) denotes the projection of the vector T(Φr) on the subspace]. We can view Eq. (6.1) as a form of projected Bellman equation. We will show that for a special choice of the norm of the projection, ΠT is a contraction mapping, so the projected Bellman equation has a unique solution Φr*. We will discuss several iterative methods for finding r* in Section 6.3. All these methods use simulation and can be shown to converge under reasonable assumptions to r*, so they produce the same approximate cost function. However, they differ in their speed of convergence and in their suitability for various problem contexts.

In another type of policy evaluation method, often called the Bellman equation error approach, which we will discuss briefly in Section 6.8.4, the parameter vector r is determined by minimizing a measure of the error in satisfying Bellman's equation; for example, by minimizing over r

‖J − TJ‖,

where ‖·‖ is some norm. If ‖·‖ is a Euclidean norm, and J(i, r) is linear in r, this minimization is a linear least squares problem.

Here are the methods that we will focus on in Section 6.3 for discounted problems, and also in Sections for other types of problems. They all depend on a parameter λ ∈ [0, 1], whose role will be discussed later.

(1) TD(λ) or temporal differences method. This algorithm may be viewed as a stochastic iterative method for solving a version of the projected equation (6.1) that depends on λ. The algorithm embodies important ideas and has played an important role in the development of the subject, but in practical terms, it is usually inferior to the next two methods, so it will be discussed in less detail.

(2) LSTD(λ) or least squares temporal differences method. This algorithm computes and solves a progressively more refined simulation-based approximation to the projected Bellman equation (6.1).

(3) LSPE(λ) or least squares policy evaluation method. This algorithm is based on the idea of executing value iteration within the lower dimensional space spanned by the basis functions.
Conceptually, it has the form

Φr_{k+1} = ΠT(Φr_k) + simulation noise,   (6.2)

i.e., the current value iterate T(Φr_k) is projected on S and is suitably approximated by simulation. The simulation noise tends to 0 asymptotically, so assuming that ΠT is a contraction, the method converges to the solution of the projected Bellman equation (6.1). There are also a number of variants of LSPE(λ). Both LSPE(λ) and its variants have the same convergence rate as LSTD(λ), because they share a common bottleneck: the slow speed of simulation.

Another method of this type is based on aggregation (cf. Section of Vol. I) and is discussed in Section 6.4. This approach can also be viewed as a problem approximation approach (cf. Section of Vol. I): the original problem is approximated with a related aggregate problem, which is then solved exactly to yield a cost-to-go approximation for the original problem. The aggregation counterpart of the equation Φr = ΠT(Φr) has the form Φr = ΦDT(Φr), where Φ and D are matrices whose rows are restricted to be probability distributions (the aggregation and disaggregation probabilities, respectively).

Q-Learning Algorithms

With this class of methods, we aim to compute, without any approximation, the optimal cost function (not just the cost function of a single policy). Q-learning maintains and updates for each state-control pair (i, u) an estimate of the expression that is minimized in the right-hand side of Bellman's equation. This is called the Q-factor of the pair (i, u), and is denoted by Q*(i, u). The Q-factors are updated with what may be viewed as a simulation-based form of value iteration, as will be explained in Section 6.5. An important advantage of using Q-factors is that when they are available, they can be used to obtain an optimal control at any state i simply by minimizing Q*(i, u) over u ∈ U(i), so the transition probabilities of the problem are not needed. On the other hand, for problems with a large number of state-control pairs, Q-learning is often impractical because there may be simply too many Q-factors to update. As a result, the algorithm is primarily suitable for systems with a small number of states (or for aggregated/few-state versions of more complex systems). There are also algorithms that use parametric approximations for the Q-factors (see Section 6.5), although their theoretical basis is generally less solid.
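To fix ideas before the simulation-based algorithms, the projected equation (6.1) can be formed and solved directly when the problem is small enough that P_µ, g_µ, and Φ fit in memory. The sketch below uses hypothetical two-state data, a single basis function (so the normal equations reduce to a scalar solve), and uniform projection weights chosen only for simplicity; the contraction guarantee mentioned above relies on a special choice of norm (steady-state weights), which this toy example does not attempt to honor.

```python
# Projected Bellman equation Phi r = Pi T(Phi r) for a fixed policy,
# solved exactly on a hypothetical two-state problem with s = 1.
alpha = 0.9
P = [[0.8, 0.2], [0.3, 0.7]]       # transition probabilities under the policy
g = [1.0, 2.0]                     # expected one-stage costs
phi = [1.0, 2.0]                   # the single basis function (column of Phi)
xi = [0.5, 0.5]                    # projection weights (uniform, for simplicity)

# Normal equations Phi' Xi (Phi r - g - alpha P Phi r) = 0, i.e. A r = b:
Pphi = [sum(P[i][j] * phi[j] for j in range(2)) for i in range(2)]
A = sum(xi[i] * phi[i] * (phi[i] - alpha * Pphi[i]) for i in range(2))
b = sum(xi[i] * phi[i] * g[i] for i in range(2))
r_star = b / A

# Fixed-point check: the residual T(Phi r*) - Phi r* is Xi-orthogonal to phi.
J = [phi[i] * r_star for i in range(2)]
TJ = [g[i] + alpha * sum(P[i][j] * J[j] for j in range(2)) for i in range(2)]
residual = sum(xi[i] * phi[i] * (TJ[i] - J[i]) for i in range(2))
```

The point of the algorithms of Section 6.3 is precisely to approximate the quantities A and b by simulation when n is too large for this direct computation.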
Chapter Organization

Throughout this chapter, we will focus almost exclusively on perfect state information problems, involving a Markov chain with a finite number of states i, transition probabilities p_ij(u), and single-stage costs g(i, u, j). Extensions of many of the ideas to continuous state spaces are possible, but they are beyond our scope. We will consider first, in Sections , the discounted problem using the notation of Section 1.3. Section 6.1 provides a broad overview of cost approximation architectures and their uses in approximate policy iteration. Section 6.2 focuses on direct methods for policy evaluation. Section 6.3 is a long section on a major class of indirect methods for policy evaluation, which are based on the projected Bellman equation. Section 6.4 discusses methods based on aggregation. Section 6.5 discusses Q-learning and its variations, and extends the projected Bellman equation approach to the case of multiple policies, and particularly to optimal stopping problems. Stochastic shortest path and average cost problems

are discussed in Sections 6.6 and 6.7, respectively. Section 6.8 extends and elaborates on the projected Bellman equation approach of Sections 6.3, 6.6, and 6.7, discusses another approach based on the Bellman equation error, and generalizes the aggregation methodology. Section 6.9 describes methods based on parametric approximation of policies rather than cost functions.

6.1 GENERAL ISSUES OF COST APPROXIMATION

Most of the methodology of this chapter deals with approximation of some type of cost function (optimal cost, cost of a policy, Q-factors, etc.). The purpose of this section is to highlight the main issues involved, without getting too much into the mathematical details. We start with general issues of parametric approximation architectures, which we have also discussed in Vol. I (Section 6.3.5). We then consider approximate policy iteration (Section 6.1.2), and the two general approaches for approximate cost evaluation (direct and indirect; Section 6.1.3). In Section 6.1.4, we discuss various special structures that can be exploited to simplify approximate policy iteration. In Sections and we provide orientation into the main mathematical issues underlying the methodology, and focus on two of its main components: contraction mappings and simulation.

6.1.1 Approximation Architectures

The major use of cost approximation is for obtaining a one-step lookahead suboptimal policy (cf. Section 6.3 of Vol. I). In particular, suppose that we use J(j, r) as an approximation to the optimal cost of the finite-state discounted problem of Section 1.3. Here J is a function of some chosen form (the approximation architecture) and r is a parameter/weight vector. Once r is determined, it yields a suboptimal control at any state i via the one-step lookahead minimization

µ(i) = arg min_{u∈U(i)} Σ_j p_ij(u) ( g(i, u, j) + α J(j, r) ).   (6.3)

We may also use a multiple-step lookahead minimization, with a cost-to-go approximation at the end of the multiple-step horizon. Conceptually, single-step and multiple-step lookahead approaches are similar, and the cost-to-go approximation algorithms of this chapter apply to both.

The degree of suboptimality of µ, as measured by ‖J_µ − J*‖_∞, is bounded by a constant multiple of the approximation error according to

‖J_µ − J*‖_∞ ≤ (2α/(1 − α)) ‖J − J*‖_∞,

as shown in Prop. . This bound is qualitative in nature, as it tends to be quite conservative in practice.

An alternative possibility is to obtain a parametric approximation Q(i, u, r) of the Q-factor of the pair (i, u), defined in terms of the optimal cost function J* as

Q*(i, u) = Σ_j p_ij(u) ( g(i, u, j) + α J*(j) ).

Since Q*(i, u) is the expression minimized in Bellman's equation, given the approximation Q(i, u, r), we can generate a suboptimal control at any state i via

µ(i) = arg min_{u∈U(i)} Q(i, u, r).

The advantage of using Q-factors is that in contrast with the minimization (6.3), the transition probabilities p_ij(u) are not needed in the above minimization. Thus Q-factors are better suited to the model-free context. Note that we may similarly use approximations to the cost functions J_µ and Q-factors Q_µ(i, u) of specific policies µ. A major use of such approximations is in the context of an approximate policy iteration scheme; see Section .

The choice of architecture is very significant for the success of the approximation approach. One possibility is to use the linear form

J(i, r) = Σ_{k=1}^s r_k φ_k(i),   (6.4)

where r = (r_1, ..., r_s) is the parameter vector, and φ_k(i) are some known scalars that depend on the state i. Thus, for each state i, the approximate cost J(i, r) is the inner product φ(i)'r of r and

φ(i) = ( φ_1(i), ..., φ_s(i) )'.

We refer to φ(i) as the feature vector of i, and to its components as features (see Fig. ). Thus the cost function is approximated by a vector in the subspace

S = { Φr | r ∈ R^s },

where Φ is the n × s matrix whose ith row is φ(i)', i.e., whose kth column is ( φ_k(1), ..., φ_k(n) )'.
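As a small illustration of the one-step lookahead minimization (6.3), the sketch below computes the lookahead policy for a hypothetical two-state, two-control problem; the transition probabilities, costs, and the fixed approximation `J_tilde` are all invented for the example.

```python
alpha = 0.9
# Hypothetical two-state, two-control problem; p[u][i][j] and g[u][i][j]
# are invented numbers, and J_tilde stands for some fixed J(j, r).
p = {0: [[0.9, 0.1], [0.2, 0.8]],
     1: [[0.5, 0.5], [0.6, 0.4]]}
g = {0: [[2.0, 2.0], [1.0, 3.0]],
     1: [[1.0, 4.0], [0.0, 2.0]]}
J_tilde = [10.0, 12.0]

def lookahead_policy(i, controls=(0, 1)):
    """One-step lookahead minimization, Eq. (6.3)."""
    def expected_cost(u):
        return sum(p[u][i][j] * (g[u][i][j] + alpha * J_tilde[j])
                   for j in range(2))
    return min(controls, key=expected_cost)

mu = [lookahead_policy(i) for i in range(2)]   # the suboptimal policy
```

Note that the minimization needs the transition probabilities p_ij(u); the Q-factor variant above avoids this by storing Q(i, u, r) directly.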

Figure . A linear feature-based architecture. It combines a mapping that extracts the feature vector φ(i) = ( φ_1(i), ..., φ_s(i) ) associated with state i, and a parameter vector r to form a linear cost approximator φ(i)'r.

We can view the s columns of Φ as basis functions, and Φr as a linear combination of basis functions. Features, when well-crafted, can capture the dominant nonlinearities of the cost function, and their linear combination may work very well as an approximation architecture. For example, in computer chess (Section of Vol. I), where the state is the current board position, appropriate features are material balance, piece mobility, king safety, and other positional factors.

Example (Polynomial Approximation)

An important example of linear cost approximation is based on polynomial basis functions. Suppose that the state consists of q integer components x_1, ..., x_q, each taking values within some limited range of integers. For example, in a queueing system, x_k may represent the number of customers in the kth queue, where k = 1, ..., q. Suppose that we want to use an approximating function that is quadratic in the components x_k. Then we can define a total of 1 + q + q^2 basis functions that depend on the state x = (x_1, ..., x_q) via

φ_0(x) = 1,   φ_k(x) = x_k,   φ_km(x) = x_k x_m,   k, m = 1, ..., q.

A linear approximation architecture that uses these functions is given by

J(x, r) = r_0 + Σ_{k=1}^q r_k x_k + Σ_{k=1}^q Σ_{m=k}^q r_km x_k x_m,

where the parameter vector r has components r_0, r_k, and r_km, with k = 1, ..., q, m = k, ..., q. In fact, any kind of approximating function that is polynomial in the components x_1, ..., x_q can be constructed similarly.
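The quadratic architecture of the preceding example can be sketched as follows; `quadratic_features` enumerates the constant, the linear terms, and the cross products x_k x_m with m ≥ k (so each product appears once), and `J_quadratic` is the corresponding linear architecture (both names are ours, not the book's).

```python
def quadratic_features(x):
    """Features for the quadratic polynomial architecture: a constant,
    the components x_k, and the products x_k x_m with m >= k."""
    q = len(x)
    feats = [1.0]
    feats += [float(x[k]) for k in range(q)]
    feats += [float(x[k] * x[m]) for k in range(q) for m in range(k, q)]
    return feats

def J_quadratic(x, r):
    """Linear architecture: the inner product of r with the feature vector."""
    phi = quadratic_features(x)
    return sum(w * f for w, f in zip(r, phi))
```

For q = 2 and x = (2, 3), this yields the six features (1, x_1, x_2, x_1^2, x_1 x_2, x_2^2).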
It is also possible to combine feature extraction with polynomial approximations. For example, the feature vector φ(i) = ( φ_1(i), ..., φ_s(i) ), transformed by a quadratic polynomial mapping, leads to approximating functions of the form

J(i, r) = r_0 + Σ_{k=1}^s r_k φ_k(i) + Σ_{k=1}^s Σ_{l=1}^s r_kl φ_k(i) φ_l(i),

where the parameter vector r has components r_0, r_k, and r_kl, with k, l = 1, ..., s. This function can be viewed as a linear cost approximation that uses the basis functions

w_0(i) = 1,   w_k(i) = φ_k(i),   w_kl(i) = φ_k(i) φ_l(i),   k, l = 1, ..., s.

Example (Interpolation)

A common type of approximation of a function J is based on interpolation. Here, a set I of special states is selected, and the parameter vector r has one component r_i per state i ∈ I, which is the value of J at i:

r_i = J(i),   i ∈ I.

The value of J at states i ∉ I is approximated by some form of interpolation using r. Interpolation may be based on geometric proximity. For a simple example that conveys the basic idea, let the system states be the integers within some interval, let I be a subset of special states, and for each state i let i̲ and ī be the states in I that are closest to i from below and from above. Then for any state i, J(i, r) is obtained by linear interpolation of the costs r_i̲ = J(i̲) and r_ī = J(ī):

J(i, r) = ((ī − i)/(ī − i̲)) r_i̲ + ((i − i̲)/(ī − i̲)) r_ī.

The scalars multiplying the components of r may be viewed as features, so the feature vector of i above consists of two nonzero features (the ones corresponding to i̲ and ī), with all other features being 0. Similar examples can be constructed for the case where the state space is a subset of a multidimensional space (see Example of Vol. I).

A generalization of the preceding example is approximation based on aggregation; see Section of Vol. I and the subsequent Section 6.4 in this chapter. There are also interesting nonlinear approximation architectures, including those defined by neural networks, perhaps in combination with feature extraction mappings (see Bertsekas and Tsitsiklis [BeT96], or Sutton and Barto [SuB98] for further discussion). In this chapter, we will mostly focus on the case of linear architectures, because many of the policy evaluation algorithms of this chapter are valid only for that case.
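The interpolation architecture of the example above is simple enough to sketch directly; here the feature vector of a state i is represented sparsely, as a dictionary with at most two nonzero weights, and the code assumes i lies between the smallest and largest special states (function names are ours).

```python
def interp_features(i, special):
    """Interpolation feature weights for state i, as {special state: weight}.
    Assumes min(special) <= i <= max(special)."""
    below = max(s for s in special if s <= i)
    above = min(s for s in special if s >= i)
    if below == above:
        return {below: 1.0}          # i is itself a special state
    w = (above - i) / (above - below)   # weight on the lower special state
    return {below: w, above: 1.0 - w}

def J_interp(i, r, special):
    """J(i, r) by linear interpolation of the values r_s at special states."""
    return sum(w * r[s] for s, w in interp_features(i, special).items())
```

For instance, with special states {0, 10}, state 4 gets weights 0.6 and 0.4 on states 0 and 10, respectively.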
We note that there has been considerable research on automatic basis function generation approaches (see, e.g., Keller, Mannor, and Precup [KMP06], and Jung and Polani [JuP07]). Moreover, it is possible to use standard basis functions which may be computed by simulation (perhaps with simulation error). The following example discusses this possibility.

Example (Krylov Subspace Generating Functions)

We have assumed so far that the columns of Φ, the basis functions, are known, and the rows φ(i)' of Φ are explicitly available to use in the various simulation-based formulas. We will now discuss a class of basis functions that may not be available, but may be approximated by simulation in the course of various algorithms. For concreteness, let us consider the evaluation of the cost vector

J_µ = (I − αP_µ)^{-1} g_µ

of a policy µ in a discounted MDP. Then J_µ has an expansion of the form

J_µ = Σ_{t=0}^∞ α^t P_µ^t g_µ.

Thus g_µ, P_µ g_µ, ..., P_µ^s g_µ yield an approximation based on the first s + 1 terms of the expansion, and seem suitable choices as basis functions. Also a more general expansion is

J_µ = J + Σ_{t=0}^∞ α^t P_µ^t q,

where J is any vector in R^n and q is the residual vector

q = T_µ J − J = g_µ + αP_µ J − J;

this can be seen from the equation J_µ − J = αP_µ(J_µ − J) + q. Thus the basis functions J, q, P_µ q, ..., P_µ^{s−1} q yield an approximation based on the first s + 1 terms of the preceding expansion.

Generally, to implement various methods in subsequent sections with basis functions of the form P_µ^m g_µ, m ≥ 0, one would need to generate the ith components (P_µ^m g_µ)(i) for any given state i, but these may be hard to calculate. However, it turns out that one can use instead single sample approximations of (P_µ^m g_µ)(i), and rely on the averaging mechanism of simulation to improve the approximation process. The details of this are beyond our scope and we refer to the original sources (Bertsekas and Yu [BeY07], [BeY09]) for further discussion and specific implementations.

We finally mention the possibility of optimal selection of basis functions within some restricted class. In particular, consider an approximation subspace

S_θ = { Φ(θ)r | r ∈ R^s },

where the s columns of the n × s matrix Φ(θ) are basis functions parametrized by a vector θ.
Assume that for a given θ, there is a corresponding vector r(θ), obtained using some algorithm, so that Φ(θ)r(θ) is an approximation of a cost function J (various such algorithms will be presented later in this chapter). Then we may wish to select θ so that some measure of approximation quality is optimized. For example, suppose that we can

compute the true cost values J(i) (or more generally, approximations to these values) for a subset of selected states I. Then we may determine θ so that

Σ_{i∈I} ( J(i) − φ(i, θ)'r(θ) )^2

is minimized, where φ(i, θ)' is the ith row of Φ(θ). Alternatively, we may determine θ so that the norm of the error in satisfying Bellman's equation,

‖ Φ(θ)r(θ) − T( Φ(θ)r(θ) ) ‖^2,

is minimized. Gradient and random search algorithms for carrying out such minimizations have been proposed in the literature (see Menache, Mannor, and Shimkin [MMS06], and Yu and Bertsekas [YuB09]).

6.1.2 Approximate Policy Iteration

Let us consider a form of approximate policy iteration, where we compute simulation-based approximations J(·, r) to the cost functions J_µ of stationary policies µ, and we use them to compute new policies based on (approximate) policy improvement. We impose no constraints on the approximation architecture, so J(i, r) may be linear or nonlinear in r. Suppose that the current policy is µ, and for a given r, J(i, r) is an approximation of J_µ(i). We generate an improved policy µ̄ using the formula

µ̄(i) = arg min_{u∈U(i)} Σ_j p_ij(u) ( g(i, u, j) + α J(j, r) ),   for all i.   (6.5)

The method is illustrated in Fig. . Its theoretical basis was discussed in Section 1.3 (cf. Prop. ), where it was shown that if the policy evaluation is accurate to within δ (in the sup-norm sense), then for an α-discounted problem, the method will yield in the limit (after infinitely many policy evaluations) a stationary policy that is optimal to within

2αδ/(1 − α)^2,

where α is the discount factor. Experimental evidence indicates that this bound is usually conservative. Furthermore, often just a few policy evaluations are needed before the bound is attained. When the sequence of policies obtained actually converges to some µ̂, then it can be proved that µ̂ is optimal to within

2αδ/(1 − α)

(see Section and also Section 6.4.2, where it is shown that if policy evaluation is done using an aggregation approach, the generated sequence of policies does converge).

Figure . Block diagram of approximate policy iteration.

A simulation-based implementation of the algorithm is illustrated in Fig. . It consists of four parts:

(a) The simulator, which given a state-control pair (i, u), generates the next state j according to the system's transition probabilities.

(b) The decision generator, which generates the control µ̄(i) of the improved policy at the current state i for use in the simulator.

(c) The cost-to-go approximator, which is the function J(j, r) that is used by the decision generator.

(d) The cost approximation algorithm, which accepts as input the output of the simulator and obtains the approximation J(·, r̄) of the cost of µ̄.

Note that there are two policies µ and µ̄, and parameter vectors r and r̄, which are simultaneously involved in this algorithm. In particular, r corresponds to the current policy µ, and the approximation J(·, r) is used in the policy improvement Eq. (6.5) to generate the new policy µ̄. At the same time, µ̄ drives the simulation that generates samples to be used by the algorithm that determines the parameter r̄ corresponding to µ̄, which will be used in the next policy iteration.

The Issue of Exploration

Let us note an important generic difficulty with simulation-based policy iteration: to evaluate a policy µ, we need to generate cost samples using that policy, but this biases the simulation by underrepresenting states that

are unlikely to occur under µ. As a result, the cost-to-go estimates of these underrepresented states may be highly inaccurate, causing potentially serious errors in the calculation of the improved control policy µ̄ via the policy improvement Eq. (6.5).

The difficulty just described is known as inadequate exploration of the system's dynamics because of the use of a fixed policy. It is a particularly acute difficulty when the system is deterministic, or when the randomness embodied in the transition probabilities is relatively small. One possibility for guaranteeing adequate exploration of the state space is to frequently restart the simulation and to ensure that the initial states employed form a rich and representative subset. A related approach, called iterative resampling, is to enrich the sampled set of states in evaluating the current policy µ as follows: derive an initial cost evaluation of µ, simulate the next policy µ̄ obtained on the basis of this initial evaluation to obtain a set of representative states S visited by µ̄, and repeat the evaluation of µ using additional trajectories initiated from S.

Figure . Simulation-based implementation of the approximate policy iteration algorithm. Given the approximation J(i, r), we generate cost samples of the improved policy µ̄ by simulation (the decision generator module). We use these samples to generate the approximator J(i, r̄) of µ̄.
Still another frequently used approach is to artificially introduce some extra randomization in the simulation, by occasionally using a randomly generated transition rather than the one dictated by the policy µ (although this may not necessarily work because all admissible controls at a given state may produce similar successor states). This and other possibilities to improve exploration will be discussed further in Section
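A minimal sketch of this randomization device, under invented assumptions: a three-state chain in which the policy µ drives every trajectory to state 0, so that without exploration states 1 and 2 are never sampled, and an exploration probability `eps` (a made-up tuning parameter) that occasionally substitutes a uniformly random transition.

```python
import random

# Hypothetical three-state chain: under the policy mu every transition
# leads back toward state 0, so states 1 and 2 are underrepresented
# unless extra randomization is introduced.
P_mu = [[1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
P_random = [[1.0 / 3] * 3 for _ in range(3)]   # the extra randomization

def next_state(i, P, rng):
    """Sample a successor of i from the row P[i]."""
    u, c = rng.random(), 0.0
    for j, pj in enumerate(P[i]):
        c += pj
        if u < c:
            return j
    return len(P[i]) - 1

def simulate_visits(eps, n_steps=30000, seed=1):
    """Visit counts when, with probability eps, a transition is replaced
    by a randomly generated one (the exploration device in the text)."""
    rng = random.Random(seed)
    visits, i = [0, 0, 0], 0
    for _ in range(n_steps):
        P = P_random if rng.random() < eps else P_mu
        i = next_state(i, P, rng)
        visits[i] += 1
    return visits

no_explore = simulate_visits(eps=0.0)     # only state 0 is ever visited
with_explore = simulate_visits(eps=0.2)   # all three states get sampled
```

As the text cautions, such randomization only helps to the extent that the random transitions actually reach the otherwise unvisited states.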

Limited Sampling/Optimistic Policy Iteration

In the approximate policy iteration approach discussed so far, the policy evaluation of the cost of the improved policy µ must be fully carried out. An alternative, known as optimistic policy iteration, is to replace the policy µ with the improved policy after only a few simulation samples have been processed, at the risk of J(·, r) being an inaccurate approximation of J_µ. Optimistic policy iteration has been successfully used, among others, in an impressive backgammon application (Tesauro [Tes92]). However, the associated theoretical convergence properties are not fully understood. As will be illustrated by the discussion of Section (see also Section of [BeT96]), optimistic policy iteration can exhibit fascinating and counterintuitive behavior, including a natural tendency for a phenomenon called chattering, whereby the generated parameter sequence {r_k} converges, while the generated policy sequence oscillates because the limit of {r_k} corresponds to multiple policies. We note that optimistic policy iteration tends to deal better with the problem of exploration discussed earlier, because with rapid changes of policy, there is less tendency to bias the simulation towards particular states that are favored by any single policy.

Approximate Policy Iteration Based on Q-Factors

The approximate policy iteration method discussed so far relies on the calculation of the approximation J(·, r) to the cost function J_µ of the current policy µ, which is then used for policy improvement using the minimization

µ̄(i) = arg min_{u∈U(i)} Σ_j p_ij(u) ( g(i, u, j) + α J(j, r) ).

Carrying out this minimization requires knowledge of the transition probabilities p_ij(u) and calculation of the associated expected values for all controls u ∈ U(i) (otherwise a time-consuming simulation of these expected values is needed).
A model-free alternative is to compute approximate Q-factors

Q(i, u, r) ≈ Σ_j p_ij(u) ( g(i, u, j) + α J_µ(j) ),   (6.6)

and use the minimization

µ̄(i) = arg min_{u∈U(i)} Q(i, u, r)   (6.7)

for policy improvement. Here, r is an adjustable parameter vector and Q(i, u, r) is a parametric architecture, possibly of the linear form

Q(i, u, r) = Σ_{k=1}^s r_k φ_k(i, u),

where φ_k(i, u) are basis functions that depend on both state and control [cf. Eq. (6.4)].

The important point here is that given the current policy µ, we can construct Q-factor approximations Q(i, u, r) using any method for constructing cost approximations J(i, r). The way to do this is to apply the latter method to the Markov chain whose states are the pairs (i, u), and the probability of transition from (i, u) to (j, v) is p_ij(u) if v = µ(j), and is 0 otherwise. This is the probabilistic mechanism by which state-control pairs evolve under the stationary policy µ.

A major concern with this approach is that the state-control pairs (i, u) with u ≠ µ(i) are never generated in this Markov chain, so they are not represented in the cost samples used to construct the approximation Q(i, u, r) (see Fig. ). This creates an acute difficulty due to diminished exploration, which must be carefully addressed in any simulation-based implementation. We will return to the use of Q-factors in Section 6.5, where we will discuss exact and approximate implementations of the Q-learning algorithm.

Figure . Markov chain underlying Q-factor-based policy evaluation, associated with policy µ. The states are the pairs (i, u), and the probability of transition from (i, u) to (j, v) is p_ij(u) if v = µ(j), and is 0 otherwise. Thus, after the first transition, the generated pairs are exclusively of the form (i, µ(i)); pairs of the form (i, u), u ≠ µ(i), are not explored.

The Issue of Policy Oscillations

Contrary to exact policy iteration, which converges to an optimal policy in a fairly regular manner, approximate policy iteration may oscillate. By this we mean that after a few iterations, policies tend to repeat in cycles. The associated parameter vectors r may also tend to oscillate.
This phenomenon is explained in Section and can be particularly damaging,

because there is no guarantee that the policies involved in the oscillation are good policies, and there is often no way to verify how well they perform relative to the optimal.

We note that oscillations can be avoided, and approximate policy iteration can be shown to converge, under special conditions that arise in particular when aggregation is used for policy evaluation. These conditions involve certain monotonicity assumptions regarding the choice of the matrix Φ, which are fulfilled in the case of aggregation (see Section 6.3.8, and also Section 6.4.2). However, when Φ is chosen in an unrestricted manner, as often happens in practical applications of the projected equation methods of Section 6.3, policy oscillations tend to occur generically, and often for very simple problems (see Section for an example).

Direct and Indirect Approximation

We will now preview two general algorithmic approaches for approximating the cost function of a fixed stationary policy µ within a subspace of the form S = {Φr | r ∈ ℜ^s}. (A third approach, based on aggregation, uses a special type of matrix Φ and is discussed in Section 6.4.)

The first and most straightforward approach, referred to as direct, is to find an approximation J̃ ∈ S that best matches J_µ in some normed error sense, i.e.,

$$\min_{\tilde J \in S} \|J_\mu - \tilde J\|,$$

or equivalently,

$$\min_{r \in \Re^s} \|J_\mu - \Phi r\|$$

(see the left-hand side of the following figure). Here ‖·‖ is usually some (possibly weighted) Euclidean norm, in which case the approximation problem is a linear least squares problem, whose solution, denoted r*, can in principle be obtained in closed form by solving the associated quadratic minimization problem. If the matrix Φ has linearly independent columns, the solution is unique and can also be represented as Φr* = ΠJ_µ, where Π denotes projection with respect to ‖·‖ onto the subspace S.
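With a weighted Euclidean norm, the direct method is just weighted linear least squares: the coefficients are r* = (ΦᵀΞΦ)⁻¹ΦᵀΞJ_µ, where Ξ is the diagonal matrix of the norm weights. A minimal sketch (the cost vector, basis functions, and weights below are made-up numbers for illustration):

```python
import numpy as np

# Made-up illustration: n = 4 states, s = 2 basis functions.
J_mu = np.array([1.0, 2.0, 4.0, 8.0])        # cost vector to approximate
Phi = np.array([[1.0, 0.0],
                [1.0, 1.0],
                [1.0, 2.0],
                [1.0, 3.0]])                 # basis functions as columns
xi = np.array([0.1, 0.2, 0.3, 0.4])          # weights defining the norm
Xi = np.diag(xi)

# Closed-form solution of min_r ||J_mu - Phi r||_xi
r_star = np.linalg.solve(Phi.T @ Xi @ Phi, Phi.T @ Xi @ J_mu)
J_approx = Phi @ r_star                      # this is Pi J_mu

# The projection matrix Pi; applying it to Phi r_star changes nothing.
Pi = Phi @ np.linalg.solve(Phi.T @ Xi @ Phi, Phi.T @ Xi)
assert np.allclose(Pi @ J_approx, J_approx)
print(r_star)
```

The assertion checks the defining property of a projection: points already in S are left unchanged. In practice, of course, J_µ itself is not available and must be replaced by simulation samples, which is the difficulty discussed next.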
A major difficulty is that specific cost function values J_µ(i) can only be estimated through their simulation-generated cost samples, as we discuss in Section 6.2.

† Note that direct approximation may be used in other approximate DP contexts, such as finite horizon problems, where we use sequential single-stage approximation of the cost-to-go functions J_k, going backwards (i.e., starting with J_N, we obtain a least squares approximation of J_{N-1}, which is used in turn to obtain a least squares approximation of J_{N-2}, etc.). This approach is sometimes called fitted value iteration.

‡ In what follows in this chapter, we will not distinguish between the linear operation of projection and the corresponding matrix representation, denoting them both by Π. The meaning should be clear from the context.

Figure. Two methods for approximating the cost function J_µ as a linear combination of basis functions (subspace S). In the direct method (left), J_µ is projected on S. In the indirect method (right), the approximation is found by solving Φr = ΠT_µ(Φr), a projected form of Bellman's equation.

An alternative and more popular approach, referred to as indirect, is to approximate the solution of Bellman's equation J = T_µ J on the subspace S (see the right-hand side of the figure above). An important example of this approach, which we will discuss in detail in Section 6.3, leads to the problem of finding a vector r* such that

$$\Phi r^* = \Pi T_\mu(\Phi r^*). \qquad (6.8)$$

We can view this equation as a projected form of Bellman's equation. We will consider another type of indirect approach based on aggregation in Section 6.4.

We note that solving projected equations as approximations to more complex/higher-dimensional equations has a long history in scientific computation in the context of Galerkin methods (see, e.g., [Kra72]). For example, some of the most popular finite-element methods for partial differential equations are of this type. However, the use of the Monte Carlo simulation ideas that are central in approximate DP is an important characteristic that differentiates the methods of the present chapter from the Galerkin methodology.
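For a discounted problem with T_µ J = g_µ + αP_µ J, the projected equation (6.8) reduces to a small s×s linear system, Φᵀ Ξ (I − αP_µ) Φ r* = Φᵀ Ξ g_µ, where Ξ is the diagonal matrix of projection weights. The following matrix-form sketch uses made-up numbers for the transition matrix, costs, and features; a practical implementation would estimate these quantities by simulation, as discussed in Section 6.3:

```python
import numpy as np

alpha = 0.9
# Made-up 3-state Markov chain under a fixed policy mu.
P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.3, 0.0, 0.7]])              # P_mu
g = np.array([1.0, 0.0, 2.0])                # expected one-stage costs g_mu
Phi = np.array([[1.0, 0.0],
                [1.0, 1.0],
                [1.0, 2.0]])                 # s = 2 basis functions
xi = np.array([0.3, 0.4, 0.3])               # projection-norm weights
Xi = np.diag(xi)

# Phi r = Pi T_mu(Phi r) is equivalent to the s x s system C r = d,
# since the projection residual must be Xi-orthogonal to the columns of Phi.
C = Phi.T @ Xi @ (np.eye(3) - alpha * P) @ Phi
d = Phi.T @ Xi @ g
r_star = np.linalg.solve(C, d)

# Check the fixed-point property: Phi r* equals the projection of T_mu(Phi r*).
Pi = Phi @ np.linalg.solve(Phi.T @ Xi @ Phi, Phi.T @ Xi)
T_of_approx = g + alpha * P @ (Phi @ r_star)
assert np.allclose(Phi @ r_star, Pi @ T_of_approx)
print(r_star)
```

The reduction to C r = d is what makes the approach attractive when n is huge: C and d are s-dimensional objects that, as discussed in Section 6.3, can be estimated by simulation without ever forming P or g explicitly.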
An important fact here is that ΠT_µ is a contraction, provided we use a special weighted Euclidean norm for projection, as will be proved in Section 6.3 for discounted problems (Prop. ). In this case, Eq. (6.8) has a unique solution, and allows the use of algorithms such as LSPE(λ) and TD(λ), which are discussed in Section 6.3. Unfortunately, the contraction property of ΠT_µ does not extend to the case where T_µ is replaced by

T, the DP mapping corresponding to multiple/all policies, although there are some interesting exceptions, one of which relates to optimal stopping problems and is discussed in Section .

Simplifications

We now consider various situations where the special structure of the problem may be exploited to simplify policy iteration or other approximate DP algorithms.

Problems with Uncontrollable State Components

In many problems of interest the state is a composite (i, y) of two components i and y, and the evolution of the main component i can be directly affected by the control u, but the evolution of the other component y cannot. Then, as discussed in Section 1.4 of Vol. I, the value and policy iteration algorithms can be carried out over a smaller state space, the space of the controllable component i. In particular, we assume that given the state (i, y) and the control u, the next state (j, z) is determined as follows: j is generated according to transition probabilities p_ij(u, y), and z is generated according to conditional probabilities p(z | j) that depend on the main component j of the new state (see the following figure). Let us assume for notational convenience that the cost of a transition from state (i, y) is of the form g(i, y, u, j) and does not depend on the uncontrollable component z of the next state (j, z). If g depends on z, it can be replaced by

$$\hat g(i,y,u,j) = \sum_z p(z \mid j)\, g(i,y,u,j,z)$$

in what follows.

Figure. States and transition probabilities for a problem with uncontrollable state components.
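The averaging step that produces ĝ is a one-line computation once p(z | j) is available. A minimal sketch (the conditional distribution and the costs below are made-up numbers): for one fixed (i, y, u), the z-dependent cost is replaced by its conditional expectation given the next main component j.

```python
import numpy as np

# Made-up example: 2 values of z, conditional on next main component j.
p_z_given_j = np.array([[0.7, 0.3],          # p(z | j=0)
                        [0.4, 0.6]])         # p(z | j=1)
# g_z[j, z] = cost g(i, y, u, j, z) for one fixed (i, y, u), varying (j, z).
g_z = np.array([[1.0, 3.0],
                [2.0, 5.0]])

# g_hat(j) = sum_z p(z | j) g(i, y, u, j, z): averages out z.
g_hat = (p_z_given_j * g_z).sum(axis=1)
print(g_hat)                                 # [1.6 3.8]
```

After this replacement, the uncontrollable component z no longer enters the cost, and the DP algorithms can be carried out over the controllable component alone, as described above.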
