Abstract

This paper deals with the stochastic control of nonlinear systems in the presence of state and control constraints, for uncertain discrete-time dynamics in finite dimensional spaces. In the deterministic case, the viability kernel is known to play a basic role for the analysis of such problems and the design of viable control feedbacks. In the present paper, we show how a stochastic viability kernel and viable feedbacks relying on probability (or chance) constraints can be defined and computed by a dynamic programming equation. An example illustrates most of the assertions.

Introduction

Risk, vulnerability, safety and precaution constitute major issues in the management and control of dynamical systems. In this respect, acceptability constraints and targets play a central role, and they have to be articulated with uncertainty, in particular with stochasticity when a probability distribution is given. The present paper addresses the issue of state and control constraints in the stochastic context. For the sake of simplicity, we consider controlled dynamical systems subject to noise. This is a natural extension of deterministic control systems, which covers a large class of situations. We thus consider the following state equation as the uncertain dynamic model
equation(1)
x(t+1) = f(t, x(t), u(t), w(t)), t = t0, …, T−1, with x(t0) = x0
where x(t) ∈ X = R^n represents the system state vector at time t, x0 ∈ X is the initial condition at initial time t0, u(t) ∈ U = R^p represents the decision or control vector, while w(t) ∈ W = R^q stands for the uncertain variable, or disturbance, or noise.
The admissibility of decisions and states is first restricted by a nonempty subset B(t,x) of admissible controls in U for all (t,x):
equation(2)
u(t) ∈ B(t, x(t)) ⊂ U.
Similarly, the relevant states of the system are limited by a nonempty subset A(t, w(t)) of the state space X, possibly uncertain, for all t,
equation(3)
x(t) ∈ A(t, w(t)) ⊂ X,
and a target
equation(4)
x(T) ∈ A(T, w(T)) ⊂ X.
We assume that
equation(5)
w(t) ∈ S(t) ⊂ W,
so that the sequences
equation(6)
w(⋅) ≔ (w(t0), w(t0+1), …, w(T−1), w(T))
belonging to
equation(7)
Ω ≔ S(t0) × ⋯ × S(T) ⊂ W^(T+1−t0)
capture the idea of possible scenarios for the problem. A scenario is an uncertainty trajectory.
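To fix ideas, the objects in (1), (2), (6) and (7) can be exercised numerically: given a feedback u(t) = policy(t, x(t)) and one scenario w(⋅), the state trajectory follows by iterating the dynamics. The sketch below is purely illustrative; the scalar dynamics f, the policy and the scenario values are assumptions, not taken from the paper.

```python
def simulate(f, x0, policy, scenario, t0=0):
    """Iterate x(t+1) = f(t, x(t), u(t), w(t)) along one scenario w(.)."""
    x = float(x0)
    path = [x]
    T = t0 + len(scenario) - 1            # scenario = (w(t0), ..., w(T)), as in (6)
    for t in range(t0, T):                # dynamics run from t0 to T-1, as in (1)
        u = policy(t, x)                  # feedback control, cf. constraint (2)
        x = f(t, x, u, scenario[t - t0])
        path.append(x)
    return path

# toy scalar dynamics with additive noise (illustrative assumption)
path = simulate(lambda t, x, u, w: x + u + w,
                x0=0.0, policy=lambda t, x: -0.5 * x,
                scenario=[0.1, -0.2, 0.0])
```

The returned `path` holds the T + 1 − t0 states visited along that single scenario; checking the state constraints (3)–(4) along it is then a pointwise test.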
These control, state or target constraints may reduce the relevant paths of the system (1). Such a feasibility issue can be addressed in a robust or in a stochastic framework. Here we focus on the stochastic case, assuming that the domain of scenarios Ω is equipped with some probability P. In this probabilistic setting, one can relax the constraint requirements (2), (3) and (4) by requiring that the state constraints be satisfied along time with a given confidence level β
equation(8)
P(w(⋅) ∈ Ω ∣ x(t) ∈ A(t, w(t)) for t = t0, …, T) ≥ β
by appropriate controls satisfying (2). Such probabilistic constraints are often called chance constraints in the stochastic literature, as in [1] and [2]. We shall give proper mathematical content to the above formula in the following section. Concentrating for now on motivation, the idea of stochastic viability is basically to require that the constraints be respected at a given confidence level β (say 90% or 99%). This implicitly assumes that some extreme events make the robust approach [3] irrelevant; the robust approach is closely related to stochasticity with a confidence level of 100%.
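A minimal way to make the chance constraint (8) concrete is Monte Carlo estimation: sample scenarios, roll out the closed-loop system, and count the fraction of scenarios along which the state constraints hold. Everything below (the Gaussian noise, the toy dynamics, the admissible set A) is an illustrative assumption, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def viable_probability(f, x0, policy, A, n_samples=10_000, t0=0, T=5):
    """Monte Carlo estimate of P(x(t) in A(t, w(t)) for all t = t0, ..., T)."""
    hits = 0
    for _ in range(n_samples):
        x, ok = x0, True
        for t in range(t0, T + 1):
            w = rng.normal()              # assumed: w(t) i.i.d. standard normal
            if not A(t, x, w):            # indicator of constraints (3)-(4)
                ok = False
                break
            if t < T:                     # dynamics (1) run up to time T-1
                x = f(t, x, policy(t, x), w)
        hits += ok
    return hits / n_samples

# toy closed-loop system: keep |x(t)| <= 2 under contracting dynamics
p_hat = viable_probability(lambda t, x, u, w: 0.5 * x + u + 0.3 * w,
                           x0=0.0, policy=lambda t, x: 0.0,
                           A=lambda t, x, w: abs(x) <= 2.0)
beta = 0.9
satisfied = p_hat >= beta                 # chance constraint (8) at level beta
```

For a fixed feedback, `p_hat` estimates the left-hand side of (8); the initial state x0 is deemed viable at level β when the estimate clears the threshold.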
Problems of dynamic control under constraints usually refer to the viability [4] or invariance [5] and [6] frameworks. Basically, such an approach focuses on inter-temporal feasible paths. From the mathematical viewpoint, most viability and weak invariance results are addressed in the continuous-time case. However, some mathematical works deal with the discrete-time case. This includes the study of numerical schemes for the approximation of viability problems for continuous dynamics, as in [4] and [7]. Important contributions for the discrete-time case are also captured by the study of positivity for linear systems, as in [8], or by hybrid control, as in [9] and [6] or [10]. Other references may be found in the control theory literature, such as [11] and [12] and the survey paper [13]. A large study focusing on the discrete-time case is also provided in [14].
Viability is defined as the ability to choose, at each time step, a control such that the system configuration remains admissible. The viability kernel associated with the dynamics and the constraints plays a major role regarding such issues. It is the set of initial states x0 from which an acceptable solution starts. For a decision maker or control designer, knowing the viability kernel has practical interest since it describes the states from which controls can be found that maintain the system in a desirable configuration forever. However, computing this kernel is not an easy task in general. Of major interest is the fact that a dynamic programming equation underlies the computation or approximation of viability kernels, as pointed out in [4] and [14].
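Anticipating the dynamic programming treatment of Section 3, a backward recursion for a stochastic viability value function can be sketched on a finite grid: V(T, x) is the probability that x meets the target constraint, V(t, x) maximizes over admissible controls the expected product of the constraint indicator and the next-stage value, and the kernel at confidence β collects the states with V(t0, x) ≥ β. This is a hypothetical finite-support sketch (i.i.d. noise, grid projection, toy data), not the paper's construction.

```python
import numpy as np

def viability_value(f, B, indicator, xs, ws, pw, t0, T):
    """Backward recursion for
    V(t, x) = max_{u in B(t,x)} sum_w pw(w) * 1{x in A(t,w)} * V(t+1, f(t,x,u,w)),
    on a finite state grid xs with finite i.i.d. noise support ws, weights pw."""
    def nearest(x):                       # project successor states onto the grid
        return xs[int(np.argmin(np.abs(np.asarray(xs) - x)))]
    V = {(T, x): sum(p * indicator(T, x, w) for w, p in zip(ws, pw)) for x in xs}
    for t in range(T - 1, t0 - 1, -1):
        for x in xs:
            best = 0.0
            for u in B(t, x):
                val = sum(p * indicator(t, x, w) * V[(t + 1, nearest(f(t, x, u, w)))]
                          for w, p in zip(ws, pw))
                best = max(best, val)
            V[(t, x)] = best
    return V

# toy setup: keep |x| <= 1 with controls {-1, 0, 1} and symmetric two-point noise
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
V = viability_value(f=lambda t, x, u, w: x + 0.5 * u + w,
                    B=lambda t, x: [-1.0, 0.0, 1.0],
                    indicator=lambda t, x, w: float(abs(x) <= 1.0),
                    xs=xs, ws=[-0.5, 0.5], pw=[0.5, 0.5], t0=0, T=2)
kernel_beta = [x for x in xs if V[(0, x)] >= 0.9]   # kernel at confidence 0.9
```

The grid projection is the crude part of the sketch: a finer grid, or interpolation of V, tightens the approximation, at the usual curse-of-dimensionality cost.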
The present paper aims at extending viability concepts and results to the stochastic case for discrete-time systems. In particular, we adapt the notions of viability kernel and viable controls to the probabilistic, or chance-constrained, framework. Mathematical material on stochastic viability can be found in [15], [16] and [17], but those works rather focus on the continuous-time case and cope with constraints satisfied almost surely. We here provide a dynamic programming and Bellman perspective for the probabilistic framework.
The paper is organized as follows. Section 2 is devoted to the statement of the probabilistic viability problem. Then, Section 3 exhibits the dynamic programming structure underlying such stochastic viability. An example is presented in Section 4 to illustrate some of the main findings.