Abstract (English)

This thesis focuses on the unconstrained and constrained minimum time problems, in particular on regularity, numerical approximation, feedback and synthesis aspects.
We first consider the problem of small-time local controllability for nonlinear, finite-dimensional, time-continuous control systems in the presence of state constraints. More precisely, given a nonlinear control system subject to state constraints and a closed target set S, we provide sufficient conditions to steer every point of a suitable neighborhood of S to S along admissible trajectories of the system, respecting the constraints, and we also give an upper estimate of the minimum time needed for each point to reach the target.
Then, in the framework of control-affine nonlinear systems, we provide sufficient conditions to reach a target through a suitable discretization of a given dynamics. Using an approach based on Hamilton-Jacobi theory, we prove the convergence of the solution of a fully discrete scheme to the (true) minimum time function, together with error estimates. We also design an approximate suboptimal discrete feedback and provide an error estimate for the time needed to reach the target through the discrete dynamics generated by this feedback.
We next propose a new formulation of the minimum time problem in which we employ the signed minimum time function, which is positive outside the target, negative in its interior, and zero on its boundary. Under standard assumptions, we prove the so-called Bridge Dynamic Programming Principle (BDPP), a relation between the value functions defined on the complement of the target and in its interior. Then, owing to the BDPP, we obtain error estimates for a semi-Lagrangian discretization of the resulting Hamilton-Jacobi-Bellman equation.
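In symbols, the signed minimum time function can be sketched as follows (illustrative notation only: here $\mathcal{T}$ denotes the target, $T$ the minimum time to reach $\mathcal{T}$ from outside, and $\widetilde{T}$ the analogous minimum time function defined in the interior of $\mathcal{T}$):
$$
T_s(x) \;=\;
\begin{cases}
T(x), & x \notin \mathcal{T},\\[2pt]
0, & x \in \partial\mathcal{T},\\[2pt]
-\widetilde{T}(x), & x \in \operatorname{int}\mathcal{T}.
\end{cases}
$$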
The remainder of this thesis introduces an approach to computing the approximate minimum time function of control problems based on reachable set approximation. The theoretical justification of the proposed approach is restricted to a class of linear control systems and uses arithmetic operations on convex sets. We provide an error estimate for the fully discrete reachable set in the Hausdorff distance, and we describe in detail the procedure for solving the corresponding discrete problem. Under standard assumptions, by means of convex analysis and known regularity of the true minimum time function, we estimate the error of its approximation. Finally, we reconstruct discrete suboptimal trajectories that reach a set of supporting points from a given target for a class of linear control problems, and we prove the convergence of discrete optimal controls by means of nonsmooth and variational analysis.