Feedback Control Theory

An excellent introduction to feedback control system design, this book offers a theoretical approach that captures the essential issues and can be applied to a wide range of practical problems. Its explorations of recent developments in the field emphasize the relationship of new procedures to classical control theory. 1992 edition.


Feedback Control Theory
John Doyle, Bruce Francis, Allen Tannenbaum
© Macmillan Publishing Co., 1990
Contents

Preface  iii

1 Introduction  1
  1.1 Issues in Control System Design  1
  1.2 What Is in This Book  7

2 Norms for Signals and Systems  13
  2.1 Norms for Signals  13
  2.2 Norms for Systems  15
  2.3 Input-Output Relationships  18
  2.4 Power Analysis (Optional)  19
  2.5 Proofs for Tables 2.1 and 2.2 (Optional)  21
  2.6 Computing by State-Space Methods (Optional)  24

3 Basic Concepts  31
  3.1 Basic Feedback Loop  31
  3.2 Internal Stability  34
  3.3 Asymptotic Tracking  38
  3.4 Performance  40

4 Uncertainty and Robustness  45
  4.1 Plant Uncertainty  45
  4.2 Robust Stability  50
  4.3 Robust Performance  53
  4.4 Robust Performance More Generally  58
  4.5 Conclusion  59

5 Stabilization  63
  5.1 Controller Parametrization: Stable Plant  63
  5.2 Coprime Factorization  65
  5.3 Coprime Factorization by State-Space Methods (Optional)  69
  5.4 Controller Parametrization: General Plant  71
  5.5 Asymptotic Properties  73
  5.6 Strong and Simultaneous Stabilization  75
  5.7 Cart-Pendulum Example  81

6 Design Constraints  87
  6.1 Algebraic Constraints  87
  6.2 Analytic Constraints  88

7 Loopshaping  101
  7.1 The Basic Technique of Loopshaping  101
  7.2 The Phase Formula (Optional)  105
  7.3 Examples  108

8 Advanced Loopshaping  117
  8.1 Optimal Controllers  117
  8.2 Loopshaping with C  118
  8.3 Plants with RHP Poles and Zeros  126
  8.4 Shaping S, T, or Q  135
  8.5 Further Notions of Optimality  138

9 Model Matching  149
  9.1 The Model-Matching Problem  149
  9.2 The Nevanlinna-Pick Problem  150
  9.3 Nevanlinna’s Algorithm  154
  9.4 Solution of the Model-Matching Problem  158
  9.5 State-Space Solution (Optional)  160

10 Design for Performance  163
  10.1 P⁻¹ Stable  163
  10.2 P⁻¹ Unstable  168
  10.3 Design Example: Flexible Beam  170
  10.4 2-Norm Minimization  175

11 Stability Margin Optimization  181
  11.1 Optimal Robust Stability  181
  11.2 Conformal Mapping  185
  11.3 Gain Margin Optimization  187
  11.4 Phase Margin Optimization  192

12 Design for Robust Performance  195
  12.1 The Modified Problem  195
  12.2 Spectral Factorization  196
  12.3 Solution of the Modified Problem  198
  12.4 Design Example: Flexible Beam Continued  204

References  209
Preface
Striking developments have taken place since 1980 in feedback control theory. The subject has become both more rigorous and more applicable. The rigor is not for its own sake, but rather because even in an engineering discipline rigor can lead to clarity and to methodical solutions to problems. The
applicability is a consequence both of new problem formulations and new mathematical solutions
to these problems. Moreover, computers and software have changed the way engineering design is
done. These developments suggest a fresh presentation of the subject, one that exploits these new
developments while emphasizing their connection with classical control.
Control systems are designed so that certain designated signals, such as tracking errors and
actuator inputs, do not exceed pre-specified levels. Hindering the achievement of this goal are
uncertainty about the plant to be controlled (the mathematical models that we use in representing
real physical systems are idealizations) and errors in measuring signals (sensors can measure signals
only to a certain accuracy). Despite the seemingly obvious requirement of bringing plant uncertainty
explicitly into control problems, it was only in the early 1980s that control researchers re-established
the link to the classical work of Bode and others by formulating a tractable mathematical notion
of uncertainty in an input-output framework and developing rigorous mathematical techniques to
cope with it. This book formulates a precise problem, called the robust performance problem, with
the goal of achieving specified signal levels in the face of plant uncertainty.
The book is addressed to students in engineering who have had an undergraduate course in
signals and systems, including an introduction to frequency-domain methods of analyzing feedback
control systems, namely, Bode plots and the Nyquist criterion. A prior course on state-space theory
would be advantageous for some optional sections, but is not necessary. To keep the development
elementary, the systems are single-input/single-output and linear, operating in continuous time.
Chapters 1 to 7 are intended as the core for a one-semester senior course; they would need
supplementing with additional examples. These chapters constitute a basic treatment of feedback
design, containing a detailed formulation of the control design problem, the fundamental issue
of performance/stability robustness tradeoff, and the graphical design technique of loopshaping,
suitable for benign plants (stable, minimum phase). Chapters 8 to 12 are more advanced and
are intended for a first graduate course. Chapter 8 is a bridge to the latter half of the book,
extending the loopshaping technique and connecting it with notions of optimality. Chapters 9 to
12 treat controller design via optimization. The approach in these latter chapters is mathematical
rather than graphical, using elementary tools involving interpolation by analytic functions. This
mathematical approach is most useful for multivariable systems, where graphical techniques usually
break down. Nevertheless, we believe the setting of single-input/single-output systems is where this
new approach should be learned.
There are many people to whom we are grateful for their help in this book: Dale Enns for
sharing his expertise in loopshaping; Raymond Kwong and Boyd Pearson for class testing the book;
and Munther Dahleh, Ciprian Foias, and Karen Rudie for reading earlier drafts. Numerous Caltech
students also struggled with various versions of this material: Gary Balas, Carolyn Beck, Bobby
Bodenheimer, and Roy Smith had particularly helpful suggestions. Finally, we would like to thank
the AFOSR, ARO, NSERC, NSF, and ONR for partial financial support during the writing of this
book.
Chapter 1
Introduction
Without control systems there could be no manufacturing, no vehicles, no computers, no regulated
environment—in short, no technology. Control systems are what make machines, in the broadest
sense of the term, function as intended. Control systems are most often based on the principle
of feedback, whereby the signal to be controlled is compared to a desired reference signal and the
discrepancy used to compute corrective control action. The goal of this book is to present a theory
of feedback control system design that captures the essential issues, can be applied to a wide range
of practical problems, and is as simple as possible.
1.1 Issues in Control System Design
The process of designing a control system generally involves many steps. A typical scenario is as
follows:
1. Study the system to be controlled and decide what types of sensors and actuators will be used
and where they will be placed.
2. Model the resulting system to be controlled.
3. Simplify the model if necessary so that it is tractable.
4. Analyze the resulting model; determine its properties.
5. Decide on performance specifications.
6. Decide on the type of controller to be used.
7. Design a controller to meet the specs, if possible; if not, modify the specs or generalize the
type of controller sought.
8. Simulate the resulting controlled system, either on a computer or in a pilot plant.
9. Repeat from step 1 if necessary.
10. Choose hardware and software and implement the controller.
11. Tune the controller on-line if necessary.
It must be kept in mind that a control engineer’s role is not merely one of designing control
systems for fixed plants, of simply “wrapping a little feedback” around an already fixed physical
system. It also involves assisting in the choice and configuration of hardware by taking a system-wide view of performance. For this reason it is important that a theory of feedback not only lead
to good designs when these are possible, but also indicate directly and unambiguously when the
performance objectives cannot be met.
It is also important to realize at the outset that practical problems have uncertain, non-minimum-phase plants (non-minimum-phase means the existence of right half-plane zeros, so the
inverse is unstable); that there are inevitably unmodeled dynamics that produce substantial uncertainty, usually at high frequency; and that sensor noise and input signal level constraints limit
the achievable benefits of feedback. A theory that excludes some of these practical issues can
still be useful in limited application domains. For example, many process control problems are so
dominated by plant uncertainty and right half-plane zeros that sensor noise and input signal level
constraints can be neglected. Some spacecraft problems, on the other hand, are so dominated by
tradeoffs between sensor noise, disturbance rejection, and input signal level (e.g., fuel consumption)
that plant uncertainty and non-minimum-phase effects are negligible. Nevertheless, any general
theory should be able to treat all these issues explicitly and give quantitative and qualitative results
about their impact on system performance.
In the present section we look at two issues involved in the design process: deciding on performance specifications and modeling. We begin with an example to illustrate these two issues.
Example A very interesting engineering system is the Keck astronomical telescope, currently
under construction on Mauna Kea in Hawaii. When completed it will be the world’s largest. The
basic objective of the telescope is to collect and focus starlight using a large concave mirror. The
shape of the mirror determines the quality of the observed image. The larger the mirror, the more
light that can be collected, and hence the dimmer the star that can be observed. The diameter of
the mirror on the Keck telescope will be 10 m. To make such a large, high-precision mirror out of
a single piece of glass would be very difficult and costly. Instead, the mirror on the Keck telescope
will be a mosaic of 36 hexagonal small mirrors. These 36 segments must then be aligned so that
the composite mirror has the desired shape.
The control system to do this is illustrated in Figure 1.1. As shown, the mirror segments
are subject to two types of forces: disturbance forces (described below) and forces from actuators.
Behind each segment are three piston-type actuators, applying forces at three points on the segment
to effect its orientation. In controlling the mirror’s shape, it suffices to control the misalignment
between adjacent mirror segments. In the gap between every two adjacent segments are (capacitor-type) sensors measuring local displacements between the two segments. These local displacements
are stacked into the vector labeled y; this is what is to be controlled. For the mirror to have the
ideal shape, these displacements should have certain ideal values that can be pre-computed; these
are the components of the vector r. The controller must be designed so that in the closed-loop
system y is held close to r despite the disturbance forces. Notice that the signals are vector valued.
Such a system is multivariable.
Our uncertainty about the plant arises from disturbance sources:
• As the telescope turns to track a star, the direction of the force of gravity on the mirror
changes.
• During the night, when astronomical observations are made, the ambient temperature changes.
[Figure 1.1: Block diagram of Keck telescope control system. The reference r enters the controller, whose output u drives the actuators; these act on the mirror segments, whose displacements y are measured by sensors and fed back. Disturbance forces also act on the mirror segments.]
• The telescope is susceptible to wind gusts.
and from uncertain plant dynamics:
• The dynamic behavior of the components—mirror segments, actuators, sensors—cannot be
modeled with infinite precision.
Now we continue with a discussion of the issues in general.
Control Objectives
Generally speaking, the objective in a control system is to make some output, say y, behave in a
desired way by manipulating some input, say u. The simplest objective might be to keep y small
(or close to some equilibrium point)—a regulator problem—or to keep y − r small for r, a reference
or command signal, in some set—a servomechanism or servo problem. Examples:
• On a commercial airplane the vertical acceleration should be less than a certain value for
passenger comfort.
• In an audio amplifier the power of noise signals at the output must be sufficiently small for
high fidelity.
• In papermaking the moisture content must be kept between prescribed values.
There might be the side constraint of keeping u itself small as well, because it might be constrained
(e.g., the flow rate from a valve has a maximum value, determined when the valve is fully open)
or it might be too expensive to use a large input. But what is small for a signal? It is natural to
introduce norms for signals; then “y small” means “‖y‖ small.” Which norm is appropriate depends
on the particular application.
In summary, performance objectives of a control system naturally lead to the introduction of
norms; then the specs are given as norm bounds on certain key signals of interest.
Models
Before discussing the issue of modeling a physical system it is important to distinguish among four
different objects:
1. Real physical system: the one “out there.”
2. Ideal physical model: obtained by schematically decomposing the real physical system into
ideal building blocks; composed of resistors, masses, beams, kilns, isotropic media, Newtonian
fluids, electrons, and so on.
3. Ideal mathematical model: obtained by applying natural laws to the ideal physical model;
composed of nonlinear partial differential equations, and so on.
4. Reduced mathematical model: obtained from the ideal mathematical model by linearization,
lumping, and so on; usually a rational transfer function.
Sometimes language makes a fuzzy distinction between the real physical system and the ideal
physical model. For example, the word resistor applies to both the actual piece of ceramic and
metal and the ideal object satisfying Ohm’s law. Of course, the adjectives real and ideal could be
used to disambiguate.
No mathematical system can precisely model a real physical system; there is always uncertainty.
Uncertainty means that we cannot predict exactly what the output of a real physical system will
be even if we know the input, so we are uncertain about the system. Uncertainty arises from two
sources: unknown or unpredictable inputs (disturbance, noise, etc.) and unpredictable dynamics.
What should a model provide? It should predict the input-output response in such a way that
we can use it to design a control system, and then be confident that the resulting design will work
on the real physical system. Of course, this is not possible. A “leap of faith” will always be required
on the part of the engineer. This cannot be eliminated, but it can be made more manageable with
the use of effective modeling, analysis, and design techniques.
Mathematical Models in This Book
The models in this book are finite-dimensional, linear, and time-invariant. The main reason for this
is that they are the simplest models for treating the fundamental issues in control system design.
The resulting design techniques work remarkably well for a large class of engineering problems,
partly because most systems are built to be as close to linear time-invariant as possible so that they
are more easily controlled. Also, a good controller will keep the system in its linear regime. The
uncertainty description is as simple as possible as well.
The basic form of the plant model in this book is
y = (P + ∆)u + n.
Here y is the output, u the input, and P the nominal plant transfer function. The model uncertainty
comes in two forms:
n:  unknown noise or disturbance
∆:  unknown plant perturbation
Both n and ∆ will be assumed to belong to sets, that is, some a priori information is assumed
about n and ∆. Then every input u is capable of producing a set of outputs, namely, the set of
all outputs (P + ∆)u + n as n and ∆ range over their sets. Models capable of producing sets of
outputs for a single input are said to be nondeterministic. There are two main ways of obtaining
models, as described next.
Models from Science
The usual way of getting a model is by applying the laws of physics, chemistry, and so on. Consider
the Keck telescope example. One can write down differential equations based on physical principles
(e.g., Newton’s laws) and making idealizing assumptions (e.g., the mirror segments are rigid). The
coefficients in the differential equations will depend on physical constants, such as masses and
physical dimensions. These can be measured. This method of applying physical laws and taking
measurements is most successful in electromechanical systems, such as aerospace vehicles and robots.
Some systems are difficult to model in this way, either because they are too complex or because
their governing laws are unknown.
Models from Experimental Data
The second way of getting a model is by doing experiments on the physical system. Let’s start
with a simple thought experiment, one that captures many essential aspects of the relationships
between physical systems and their models and the issues in obtaining models from experimental
data. Consider a real physical system—the plant to be controlled—with one input, u, and one
output, y. To design a control system for this plant, we must understand how u affects y.
The experiment runs like this. Suppose that the real physical system is in a rest state before
an input u is applied (i.e., u = y = 0). Now apply some input signal u, resulting in some output
signal y. Observe the pair (u, y). Repeat this experiment several times. Pretend that these data
pairs are all we know about the real physical system. (This is the black box scenario. Usually, we
know something about the internal workings of the system.)
After doing this experiment we will notice several things. First, the same input signal at different
times produces different output signals. Second, if we hold u = 0, y will fluctuate in an unpredictable
manner. Thus the real physical system produces just one output for any given input, so it itself
is deterministic. However, we observers are uncertain because we cannot predict what that output
will be.
Ideally, the model should cover the data in the sense that it should be capable of producing
every experimentally observed input-output pair. (Of course, it would be better to cover not just
the data observed in a finite number of experiments, but anything that can be produced by the real
physical system. Obviously, this is impossible.) If nondeterminism that reasonably covers the range
of expected data is not built into the model, we will not trust that designs based on such models
will work on the real system.
In summary, for a useful theory of control design, plant models must be nondeterministic, having
uncertainty built in explicitly.
Synthesis Problem
A synthesis problem is a theoretical problem, precise and unambiguous. Its purpose is primarily
pedagogical: It gives us something clear to focus on for the purpose of study. The hope is that
the principles learned from studying a formal synthesis problem will be useful when it comes to
designing a real control system.
The most general block diagram of a control system is shown in Figure 1.2.

[Figure 1.2: Most general control system. The generalized plant has exogenous input w and control input u, and produces the controlled signals z and the measured outputs y; the controller maps y to u.]

The generalized plant
consists of everything that is fixed at the start of the control design exercise: the plant, actuators
that generate inputs to the plant, sensors measuring certain signals, analog-to-digital and digital-to-analog converters, and so on. The controller consists of the designable part: it may be an electric
circuit, a programmable logic controller, a general-purpose computer, or some other such device.
The signals w, z, y, and u are, in general, vector-valued functions of time. The components of w
are all the exogenous inputs: references, disturbances, sensor noises, and so on. The components of
z are all the signals we wish to control: tracking errors between reference signals and plant outputs,
actuator signals whose values must be kept between certain limits, and so on. The vector y contains
the outputs of all sensors. Finally, u contains all controlled inputs to the generalized plant. (Even
open-loop control fits in; the generalized plant would be so defined that y is always constant.)
Very rarely is the exogenous input w a fixed, known signal. One of these rare instances is where
a robot manipulator is required to trace out a definite path, as in welding. Usually, w is not fixed
but belongs to a set that can be characterized to some degree. Some examples:
• In a thermostat-controlled temperature regulator for a house, the reference signal is always
piecewise constant: at certain times during the day the thermostat is set to a new value. The
temperature of the outside air is not piecewise constant but varies slowly within bounds.
• In a vehicle such as an airplane or ship the pilot’s commands on the steering wheel, throttle,
pedals, and so on come from a predictable set, and the gusts and wave motions have amplitudes
and frequencies that can be bounded with some degree of confidence.
• The load power drawn on an electric power system has predictable characteristics.
Sometimes the designer does not attempt to model the exogenous inputs. Instead, she or he
designs for a suitable response to a test input, such as a step, a sinusoid, or white noise. The
designer may know from past experience how this correlates with actual performance in the field.
Desired properties of z generally relate to how large it is according to various measures, as discussed
above.
Finally, the output of the design exercise is a mathematical model of a controller. This must
be implementable in hardware. If the controller you design is governed by a nonlinear partial
differential equation, how are you going to implement it? A linear ordinary differential equation
with constant coefficients, representing a finite-dimensional, time-invariant, linear system, can be
simulated via an analog circuit or approximated by a digital computer, so this is the most common
type of control law.
The synthesis problem can now be stated as follows: Given a set of generalized plants, a set
of exogenous inputs, and an upper bound on the size of z, design an implementable controller to
achieve this bound. How the size of z is to be measured (e.g., power or maximum amplitude)
depends on the context. This book focuses on an elementary version of this problem.
1.2 What Is in This Book
Since this book is for a first course on this subject, attention is restricted to systems whose models
are single-input/single-output, finite-dimensional, linear, and time-invariant. Thus they have transfer functions that are rational in the Laplace variable s. The general layout of the book is that
Chapters 2 to 4 and 6 are devoted to analysis of control systems, that is, the controller is already
specified, and Chapters 5 and 7 to 12 to design.
Performance of a control system is specified in terms of the size of certain signals of interest. For
example, the performance of a tracking system could be measured by the size of the error signal.
Chapter 2, Norms for Signals and Systems, looks at several ways of defining norms for a signal u(t); in particular, the 2-norm (associated with energy),

    ( ∫_{-∞}^{∞} u(t)² dt )^{1/2},

the ∞-norm (maximum absolute value),

    max_t |u(t)|,

and the square root of the average power (actually, not quite a norm),

    ( lim_{T→∞} (1/2T) ∫_{-T}^{T} u(t)² dt )^{1/2}.

Also introduced are two norms for a system’s transfer function G(s): the 2-norm,

    ‖G‖₂ := ( (1/2π) ∫_{-∞}^{∞} |G(jω)|² dω )^{1/2},

and the ∞-norm,

    ‖G‖∞ := max_ω |G(jω)|.
Notice that ‖G‖∞ equals the peak amplitude on the Bode magnitude plot of G. Then two very
useful tables are presented summarizing input-output norm relationships. For example, one table
gives a bound on the 2-norm of the output knowing the 2-norm of the input and the ∞-norm of the
transfer function. Such results are very useful in predicting, for example, the effect a disturbance will have on the output of a feedback system.

[Figure 1.3: Single-loop feedback system. The reference r minus the fed-back measurement forms the error e, which enters the controller C; the controller output u plus the disturbance d drives the plant P to produce the output y; sensor noise n corrupts y in the feedback path.]
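The norms just defined are easy to approximate numerically. The sketch below is my own illustration of the definitions, not anything from the book; the function names and the test signal e^{-t} are arbitrary choices.

```python
import math

def norm2(u, dt):
    # 2-norm of a sampled signal: (integral of u(t)^2 dt)^(1/2), via a Riemann sum
    return math.sqrt(sum(x * x for x in u) * dt)

def norm_inf(u):
    # infinity-norm of a sampled signal: the peak absolute value
    return max(abs(x) for x in u)

def hinf_norm(G, w_grid):
    # infinity-norm of a transfer function: peak of |G(jw)| over a frequency grid,
    # i.e., the peak on the Bode magnitude plot
    return max(abs(G(1j * w)) for w in w_grid)

# Signal example: u(t) = e^{-t} on [0, 10]; the exact 2-norm is sqrt(1/2) ≈ 0.707.
dt = 1e-4
u = [math.exp(-k * dt) for k in range(int(10 / dt))]
print(norm2(u, dt))   # ≈ 0.707
print(norm_inf(u))    # 1.0

# System example: G(s) = 1/(s+1); |G(jw)| peaks at w = 0, so the norm is 1.
G = lambda s: 1 / (s + 1)
print(hinf_norm(G, [0.01 * k for k in range(1000)]))  # 1.0
```

With finer grids both approximations converge to the exact values; Chapter 2 gives state-space methods for computing these norms without gridding.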
Chapters 3 and 4 are the most fundamental in the book. The system under consideration is
shown in Figure 1.3, where P and C are the plant and controller transfer functions. The signals are
as follows:
r   reference or command input
e   tracking error
u   control signal, controller output
d   plant disturbance
y   plant output
n   sensor noise
In Chapter 3, Basic Concepts, internal stability is defined and characterized. Then the system is
analyzed for its ability to track a single reference signal r—a step or a ramp—asymptotically as
time increases. Finally, we look at tracking a set of reference signals. The transfer function from
reference input r to tracking error e is denoted S, the sensitivity function. It is argued that a useful
tracking performance criterion is ‖W1S‖∞ < 1, where W1 is a transfer function which can be tuned
by the control system designer.
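As a toy illustration of checking such a criterion (the plant, controller, and weight here are my own choices, not an example from the book), one can evaluate ‖W1S‖∞ approximately on a frequency grid:

```python
# Toy check of a tracking spec ||W1*S||_inf < 1 (hypothetical plant, controller,
# and weight -- not from the book).
def P(s): return 1 / (s + 1)     # plant
def C(s): return 1 / s           # integral controller
W1 = 0.5                         # constant performance weight

def S(s):
    # sensitivity function: transfer function from reference r to tracking error e
    return 1 / (1 + P(s) * C(s))

# Approximate the infinity-norm by gridding frequency logarithmically.
w_grid = [10 ** (k / 100) for k in range(-300, 300)]   # 1e-3 to ~1e3 rad/s
peak = max(abs(W1 * S(1j * w)) for w in w_grid)
print(peak < 1)   # True: this controller meets the spec for this weight
```

In practice W1 would be frequency dependent, large at low frequency where good tracking is demanded; that shaping is the subject of later chapters.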
Since no mathematical system can exactly model a physical system, we must be aware of how
modeling errors might adversely affect the performance of a control system. Chapter 4, Uncertainty
and Robustness, begins with a treatment of various models of plant uncertainty. The basic technique
is to model the plant as belonging to a set P. Such a set can be either structured—for example,
there are a finite number of uncertain parameters—or unstructured—the frequency response lies in
a set in the complex plane for every frequency. For us, unstructured is more important because it
leads to a simple and useful design theory. In particular, multiplicative perturbation is chosen for
detailed study, it being typical. In this uncertainty model there is a nominal plant P and the family
P consists of all perturbed plants P̃ such that at each frequency ω the ratio P̃ (jω)/P (jω) lies in a
disk in the complex plane with center 1. This notion of disk-like uncertainty is key; because of it
the mathematical problems are tractable.
Generally speaking, the notion of robustness means that some characteristic of the feedback
system holds for every plant in the set P. A controller C provides robust stability if it provides
internal stability for every plant in P. Chapter 4 develops a test for robust stability for the multiplicative perturbation model, a test involving C and P. The test is ‖W2T‖∞ < 1. Here T is the
complementary sensitivity function, equal to 1 − S (or the transfer function from r to y), and W2
is a transfer function whose magnitude at frequency ω equals the radius of the uncertainty disk at
that frequency.
The final topic in Chapter 4 is robust performance, guaranteed tracking in the face of plant
uncertainty. The main result is that the tracking performance spec ‖W1S‖∞ < 1 is satisfied for all plants in the multiplicative perturbation set if and only if |W1S| + |W2T| is less than 1 at all frequencies, that is,

    ‖ |W1S| + |W2T| ‖∞ < 1.    (1.1)
This is an analysis result: It tells exactly when some candidate controller provides robust performance.
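Condition (1.1) lends itself to a direct numerical check. The plant, controller, and weights below are hypothetical, chosen only to illustrate the test:

```python
# Gridded check of the robust performance test || |W1*S| + |W2*T| ||_inf < 1.
# Plant, controller, and weights are hypothetical illustrations, not from the book.
def P(s): return 1 / (s + 1)
def C(s): return 1 / s
def W1(s): return 0.3 / (s + 1)       # performance weight: large at low frequency
def W2(s): return 0.3 * s / (s + 1)   # uncertainty weight: grows with frequency

def rp_index(w):
    s = 1j * w
    L = P(s) * C(s)
    S = 1 / (1 + L)     # sensitivity
    T = L / (1 + L)     # complementary sensitivity, T = 1 - S
    return abs(W1(s) * S) + abs(W2(s) * T)

w_grid = [10 ** (k / 100) for k in range(-300, 300)]
peak = max(rp_index(w) for w in w_grid)
print(peak < 1)   # True: robust performance holds for these weights
```

Note the structure of the weights: W1 penalizes S where tracking matters (low frequency) and W2 penalizes T where model uncertainty is large (high frequency), so the two terms fight over the mid-frequency range.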
Chapter 5, Stabilization, is the first on design. Most synthesis problems can be formulated like
this: Given P , design C so that the feedback system (1) is internally stable, and (2) acquires some
additional desired property or properties, for example, the output y asymptotically tracks a step
input r. The method of solution presented here is to parametrize all Cs for which (1) is true and
then to find a parameter for which (2) holds. In this chapter such a parametrization is derived; it
has the form
    C = (X + MQ) / (Y − NQ),
where N , M , X, and Y are fixed stable proper transfer functions and Q is the parameter, an
arbitrary stable proper transfer function. The usefulness of this parametrization derives from the
fact that all closed-loop transfer functions are very simple functions of Q; for instance, the sensitivity function S, while a nonlinear function of C, equals simply MY − MNQ. This parametrization
is then applied to three problems: achieving asymptotic performance specs, such as tracking a
step; internal stabilization by a stable controller; and simultaneous stabilization of two plants by a
common controller.
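The claim that S is affine in Q is easy to sanity-check numerically. In the sketch below (Python; the numbers are random stand-ins for the values of the transfer functions at one fixed frequency, and X is chosen to satisfy the Bezout identity NX + MY = 1 that the factorization construction provides), S = 1/(1 + PC) indeed reduces to MY − MNQ:

```python
import random

# Random stand-ins for N, M, Y, Q evaluated at one frequency (a numerical
# sketch, not a proof; the fixed seed keeps Y - N*Q safely away from zero).
random.seed(0)
N, M, Y, Q = [random.uniform(0.5, 2.0) for _ in range(4)]
X = (1 - M * Y) / N              # enforce the Bezout identity N*X + M*Y = 1

P = N / M                        # plant
C = (X + M * Q) / (Y - N * Q)    # controller from the parametrization

S = 1 / (1 + P * C)              # sensitivity function
assert abs(S - (M * Y - M * N * Q)) < 1e-12   # S = MY - MNQ, affine in Q
```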
Before we see how to design control systems for the robust performance specification, it is
important to understand the basic limitations on achievable performance: Why can’t we achieve
both arbitrarily good performance and stability robustness at the same time? In Chapter 6, Design
Constraints, we study design constraints arising from two sources: from algebraic relationships that
must hold among various transfer functions and from the fact that closed-loop transfer functions
must be stable, that is, analytic in the right half-plane. The main conclusion is that feedback control
design always involves a tradeoff between performance and stability robustness.
Chapter 7, Loopshaping, presents a graphical technique for designing a controller to achieve
robust performance. This method is the most common in engineering practice. It is especially
suitable for today’s CAD packages in view of their graphics capabilities. The loop transfer function
is L := P C. The idea is to shape the Bode magnitude plot of L so that (1.1) is achieved, at
least approximately, and then to back-solve for C via C = L/P. When P or P⁻¹ is not stable, L must contain P's unstable poles and zeros (for internal stability of the feedback loop), an awkward constraint. For this reason, it is assumed in Chapter 7 that P and P⁻¹ are both stable.
Thus Chapters 2 to 7 constitute a basic treatment of feedback design, containing a detailed
formulation of the control design problem, the fundamental issue of performance/stability robustness
tradeoff, and a graphical design technique suitable for benign plants (stable, minimum-phase).
Chapters 8 to 12 are more advanced.
Chapter 8, Advanced Loopshaping, is a bridge between the two halves of the book; it extends the
loopshaping technique and connects it with the notion of optimal designs. Loopshaping in Chapter 7
focuses on L, but other quantities, such as C, S, T , or the Q parameter in the stabilization results
of Chapter 5, may also be “shaped” to achieve the same end. For many problems these alternatives
are more convenient. Chapter 8 also offers some suggestions on how to extend loopshaping to handle
right half-plane poles and zeros.
Optimal controllers are introduced in a formal way in Chapter 8. Several different notions of
optimality are considered with an aim toward understanding in what way loopshaping controllers
can be said to be optimal. It is shown that loopshaping controllers satisfy a very strong type
of optimality, called self-optimality. The implication of this result is that when loopshaping is
successful at finding an adequate controller, it cannot be improved upon uniformly.
Chapters 9 to 12 present a recently developed approach to the robust performance design problem. The approach is mathematical rather than graphical, using elementary tools involving interpolation by analytic functions. This mathematical approach is most useful for multivariable systems, where graphical techniques usually break down. Nevertheless, the setting of single-input/single-output systems is where this new approach should be learned. Besides, present-day software for control design (e.g., MATLAB and Program CC) incorporates this approach.
Chapter 9, Model Matching, studies a hypothetical control problem called the model-matching
problem: Given stable proper transfer functions T1 and T2, find a stable transfer function Q to minimize ‖T1 − T2 Q‖∞. The interpretation is this: T1 is a model, T2 is a plant, and Q is a cascade controller to be designed so that T2 Q approximates T1. Thus T1 − T2 Q is the error transfer function. This problem is turned into a special interpolation problem: Given points {ai} in the right half-plane and values {bi}, also complex numbers, find a stable transfer function G so that ‖G‖∞ < 1 and G(ai) = bi, that is, G interpolates the value bi at the point ai. Determining when such a G exists, and how to find one, utilizes some beautiful mathematics due to Nevanlinna and Pick.
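In the Nevanlinna–Pick theory, existence of a suitable G is characterized by positive (semi)definiteness of the Pick matrix with entries (1 − bi b̄j)/(ai + āj). A minimal numerical sketch (Python with NumPy; this checks the non-strict bound ‖G‖∞ ≤ 1 — the strict inequality corresponds to positive definiteness):

```python
import numpy as np

def pick_matrix(a, b):
    """Pick matrix for right half-plane interpolation data G(a_i) = b_i."""
    a = np.asarray(a, dtype=complex)
    b = np.asarray(b, dtype=complex)
    return (1 - np.outer(b, b.conj())) / np.add.outer(a, a.conj())

def interpolant_exists(a, b, tol=1e-12):
    """True iff some stable G with ||G||_inf <= 1 satisfies G(a_i) = b_i."""
    return bool(np.linalg.eigvalsh(pick_matrix(a, b)).min() >= -tol)

# One point: G(1) = 0.5 is achievable (take the constant G = 0.5) ...
assert interpolant_exists([1.0], [0.5])
# ... but G(1) = 2 is not, since |G| would have to exceed 1 somewhere.
assert not interpolant_exists([1.0], [2.0])
```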
Chapter 10, Design for Performance, treats the problem of designing a controller to achieve the
performance criterion ‖W1 S‖∞ < 1 alone, that is, with no plant uncertainty. When does such a
controller exist, and how can it be computed? These questions are easy when the inverse of the
plant transfer function is stable. When the inverse is unstable (i.e., P is non-minimum-phase), the
questions are more interesting. The solutions presented in this chapter use model-matching theory.
The procedure is applied to designing a controller for a flexible beam. The desired performance is
given in terms of step response specs: overshoot and settling time. It is shown how to choose the
weight W1 to accommodate these time domain specs. Also treated in Chapter 10 is minimization
of the 2-norm of some closed-loop transfer function, e.g., ‖W1 S‖2.
Next, Chapter 11, Stability Margin Optimization, considers the problem of designing a controller whose sole purpose is to maximize the stability margin; that is, performance is ignored.
The maximum obtainable stability margin is a measure of how difficult the plant is to control.
Three measures of stability margin are treated: the ∞-norm of a multiplicative perturbation, gain
margin, and phase margin. It is shown that the problem of optimizing these stability margins can
also be reduced to a model-matching problem.
Chapter 12, Design for Robust Performance, returns to the robust performance problem of
designing a controller to achieve (1.1). Chapter 7 proposed loopshaping as a graphical method
when P and P⁻¹ are stable. Without these assumptions loopshaping can be awkward, and the methodical procedure in this chapter can be used instead. Actually, (1.1) is too hard for mathematical analysis, so a compromise criterion is posed, namely,

    ‖ |W1 S|² + |W2 T|² ‖∞ < 1/2.    (1.2)
Using a technique called spectral factorization, we can reduce this problem to a model-matching
problem. As an illustration, the flexible beam example is reconsidered; besides step response specs
on the tip deflection, a hard limit is placed on the plant input to prevent saturation of an amplifier.
Finally, some words about frequency-domain versus time-domain methods of design. Horowitz
(1963) has long maintained that “frequency response methods have been found to be especially
useful and transparent, enabling the designer to see the tradeoff between conflicting design factors.”
This point of view has gained much greater acceptance within the control community at large
in recent years, although perhaps it would be better to stress the importance of input-output or
operator-theoretic versus state-space methods, instead of frequency domain versus time domain.
This book focuses almost exclusively on input-output methods, not because they are ultimately
more fundamental than state-space methods, but simply for pedagogical reasons.
Notes and References
There are many books on feedback control systems. Particularly good ones are Bower and Schultheiss
(1961) and Franklin et al. (1986). Regarding the Keck telescope, see Aubrun et al. (1987, 1988).
Chapter 2
Norms for Signals and Systems
One way to describe the performance of a control system is in terms of the size of certain signals
of interest. For example, the performance of a tracking system could be measured by the size of
the error signal. This chapter looks at several ways of defining a signal’s size (i.e., at several norms
for signals). Which norm is appropriate depends on the situation at hand. Also introduced are
norms for a system’s transfer function. Then two very useful tables are developed summarizing
input-output norm relationships.
2.1 Norms for Signals
We consider signals mapping (−∞, ∞) to R. They are assumed to be piecewise continuous. Of
course, a signal may be zero for t < 0 (i.e., it may start at time t = 0).
We are going to introduce several different norms for such signals. First, recall that a norm
must have the following four properties:
(i) ‖u‖ ≥ 0

(ii) ‖u‖ = 0 ⇔ u(t) = 0, ∀t

(iii) ‖au‖ = |a| ‖u‖, ∀a ∈ R

(iv) ‖u + v‖ ≤ ‖u‖ + ‖v‖
The last property is the familiar triangle inequality.
1-Norm The 1-norm of a signal u(t) is the integral of its absolute value:

    ‖u‖1 := ∫_{−∞}^{∞} |u(t)| dt.
2-Norm The 2-norm of u(t) is

    ‖u‖2 := ( ∫_{−∞}^{∞} u(t)² dt )^{1/2}.
For example, suppose that u is the current through a 1 Ω resistor. Then the instantaneous power
equals u(t)2 and the total energy equals the integral of this, namely, kuk22 . We shall generalize this
interpretation: The instantaneous power of a signal u(t) is defined to be u(t)2 and its energy is
defined to be the square of its 2-norm.
∞-Norm The ∞-norm of a signal is the least upper bound of its absolute value:

    ‖u‖∞ := sup_t |u(t)|.
For example, the ∞-norm of
(1 − e−t )1(t)
equals 1. Here 1(t) denotes the unit step function.
Power Signals The average power of u is the average over time of its instantaneous power:

    lim_{T→∞} (1/2T) ∫_{−T}^{T} u(t)² dt.

The signal u will be called a power signal if this limit exists, and then the square root of the average power will be denoted pow(u):

    pow(u) := ( lim_{T→∞} (1/2T) ∫_{−T}^{T} u(t)² dt )^{1/2}.
Note that a nonzero signal can have zero average power, so pow is not a norm. It does, however,
have properties (i), (iii), and (iv).
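For example, pow of a unit sinusoid is 1/√2, since sin² averages to 1/2; a plain midpoint-rule sketch confirms this numerically (the horizon T and grid size n are arbitrary choices):

```python
import math

def pow_estimate(u, T, n=200_000):
    """Approximate pow(u) = ( (1/2T) * integral_{-T}^{T} u(t)^2 dt )^(1/2)."""
    dt = 2 * T / n
    total = sum(u(-T + (k + 0.5) * dt) ** 2 for k in range(n)) * dt
    return math.sqrt(total / (2 * T))

est = pow_estimate(math.sin, T=1000.0)
assert abs(est - 1 / math.sqrt(2)) < 1e-3   # pow(sin) = 1/sqrt(2)
```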
Now we ask the question: Does finiteness of one norm imply finiteness of any others? There are
some easy answers:
1. If ‖u‖2 < ∞, then u is a power signal with pow(u) = 0.

Proof Assuming that u has finite 2-norm, we get

    (1/2T) ∫_{−T}^{T} u(t)² dt ≤ (1/2T) ‖u‖2².

But the right-hand side tends to zero as T → ∞.
2. If u is a power signal and ‖u‖∞ < ∞, then pow(u) ≤ ‖u‖∞.

Proof We have

    (1/2T) ∫_{−T}^{T} u(t)² dt ≤ ‖u‖∞² (1/2T) ∫_{−T}^{T} dt = ‖u‖∞².

Let T tend to ∞.
Figure 2.1: Set inclusions.
3. If ‖u‖1 < ∞ and ‖u‖∞ < ∞, then ‖u‖2 ≤ (‖u‖∞ ‖u‖1)^{1/2}, and hence ‖u‖2 < ∞.

Proof

    ∫_{−∞}^{∞} u(t)² dt = ∫_{−∞}^{∞} |u(t)| |u(t)| dt ≤ ‖u‖∞ ‖u‖1.
A Venn diagram summarizing the set inclusions is shown in Figure 2.1. Note that the set labeled
“pow” contains all power signals for which pow is finite; the set labeled “1” contains all signals of
finite 1-norm; and so on. It is instructive to get examples of functions in all the components of this
diagram (Exercise 2). For example, consider

    u1(t) = 0 if t ≤ 0;  1/√t if 0 < t ≤ 1;  0 if t > 1.

This has finite 1-norm:

    ‖u1‖1 = ∫_0^1 (1/√t) dt = 2.

Its 2-norm is infinite because the integral of 1/t is divergent over the interval [0, 1]. For the same reason, u1 is not a power signal. Finally, u1 is not bounded, so ‖u1‖∞ is infinite. Therefore, u1 lives in the bottom component in the diagram.
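The computation ‖u1‖1 = 2 can be confirmed numerically; SciPy's quad copes with the integrable singularity at t = 0 (a sketch assuming SciPy is available):

```python
from scipy.integrate import quad

# 1-norm of u1: the integral of 1/sqrt(t) over (0, 1], which equals 2.
val, err = quad(lambda t: 1 / t**0.5, 0, 1)
assert abs(val - 2) < 1e-6
```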
2.2 Norms for Systems
We consider systems that are linear, time-invariant, causal, and (usually) finite-dimensional. In the
time domain an input-output model for such a system has the form of a convolution equation,
    y = G ∗ u,

that is,

    y(t) = ∫_{−∞}^{∞} G(t − τ) u(τ) dτ.
Causality means that G(t) = 0 for t < 0. Let Ĝ(s) denote the transfer function, the Laplace
transform of G. Then Ĝ is rational (by finite-dimensionality) with real coefficients. We say that Ĝ
is stable if it is analytic in the closed right half-plane (Re s ≥ 0), proper if Ĝ(j∞) is finite (degree
of denominator ≥ degree of numerator), strictly proper if Ĝ(j∞) = 0 (degree of denominator >
degree of numerator), and biproper if Ĝ and Ĝ−1 are both proper (degree of denominator = degree
of numerator).
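These degree conditions are easy to mechanize. The helper below (a hypothetical function, not from the text) classifies a rational transfer function from the degrees of its numerator and denominator:

```python
def classify(num_deg, den_deg):
    """Classify a rational transfer function by its polynomial degrees."""
    if den_deg > num_deg:
        return "strictly proper"
    if den_deg == num_deg:
        return "biproper"   # proper, and the inverse is proper too
    return "improper"

assert classify(0, 1) == "strictly proper"   # e.g. 1/(s + 1)
assert classify(1, 1) == "biproper"          # e.g. (s + 2)/(s + 1)
assert classify(2, 1) == "improper"          # e.g. s^2/(s + 1)
```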
We introduce two norms for the transfer function Ĝ.

2-Norm

    ‖Ĝ‖2 := ( (1/2π) ∫_{−∞}^{∞} |Ĝ(jω)|² dω )^{1/2}

∞-Norm

    ‖Ĝ‖∞ := sup_ω |Ĝ(jω)|
Note that if Ĝ is stable, then by Parseval's theorem

    ‖Ĝ‖2 = ( (1/2π) ∫_{−∞}^{∞} |Ĝ(jω)|² dω )^{1/2} = ( ∫_{−∞}^{∞} |G(t)|² dt )^{1/2}.
The ∞-norm of Ĝ equals the distance in the complex plane from the origin to the farthest point
on the Nyquist plot of Ĝ. It also appears as the peak value on the Bode magnitude plot of Ĝ. An
important property of the ∞-norm is that it is submultiplicative:
    ‖ĜĤ‖∞ ≤ ‖Ĝ‖∞ ‖Ĥ‖∞.
It is easy to tell when these two norms are finite.
Lemma 1 The 2-norm of Ĝ is finite iff Ĝ is strictly proper and has no poles on the imaginary
axis; the ∞-norm is finite iff Ĝ is proper and has no poles on the imaginary axis.
Proof Assume that Ĝ is strictly proper, with no poles on the imaginary axis. Then the Bode
magnitude plot rolls off at high frequency. It is not hard to see that the plot of c/(τ s + 1) dominates
that of Ĝ for sufficiently large positive c and sufficiently small positive τ , that is,
    |c/(τjω + 1)| ≥ |Ĝ(jω)|,  ∀ω.

But c/(τs + 1) has finite 2-norm; its 2-norm equals c/√(2τ) (how to do this computation is shown below). Hence Ĝ has finite 2-norm.
The rest of the proof follows similar lines.
How to Compute the 2-Norm
Suppose that Ĝ is strictly proper and has no poles on the imaginary axis (so its 2-norm is finite). We have

    ‖Ĝ‖2² = (1/2π) ∫_{−∞}^{∞} |Ĝ(jω)|² dω
          = (1/2πj) ∫_{−j∞}^{j∞} Ĝ(−s)Ĝ(s) ds
          = (1/2πj) ∮ Ĝ(−s)Ĝ(s) ds.

The last integral is a contour integral up the imaginary axis, then around an infinite semicircle in the left half-plane; the contribution to the integral from this semicircle equals zero because Ĝ is strictly proper. By the residue theorem, ‖Ĝ‖2² equals the sum of the residues of Ĝ(−s)Ĝ(s) at its poles in the left half-plane.
Example 1 Take Ĝ(s) = 1/(τ s + 1), τ > 0. The left half-plane pole of Ĝ(−s)Ĝ(s) is at s = −1/τ .
The residue at this pole equals

    lim_{s→−1/τ} (s + 1/τ) · (1/(−τs + 1)) · (1/(τs + 1)) = 1/(2τ).

Hence ‖Ĝ‖2 = 1/√(2τ).
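The residue answer can be cross-checked by brute-force numerical evaluation of the defining frequency integral (a sketch; the truncation limit W and grid size n are arbitrary numerical choices):

```python
import math

def l2_norm_first_order(tau, n=400_000, W=1e4):
    """Approximate ||G||_2 for G(s) = 1/(tau*s + 1) by the integral
    ((1/2pi) * int |G(jw)|^2 dw)^(1/2), truncated to |w| <= W."""
    dw = 2 * W / n
    total = 0.0
    for k in range(n):
        w = -W + (k + 0.5) * dw
        total += dw / (1 + (tau * w) ** 2)
    return math.sqrt(total / (2 * math.pi))

tau = 0.5
# Example 1 gives the exact value 1/sqrt(2*tau).
assert abs(l2_norm_first_order(tau) - 1 / math.sqrt(2 * tau)) < 1e-3
```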
How to Compute the ∞-Norm
This requires a search. Set up a fine grid of frequency points,
{ω1 , . . . , ωN }.
Then an estimate for ‖Ĝ‖∞ is

    max_{1≤k≤N} |Ĝ(jωk)|.

Alternatively, one could find where |Ĝ(jω)| is maximum by solving the equation

    d(|Ĝ|²)/dω (jω) = 0.
This derivative can be computed in closed form because Ĝ is rational. It then remains to compute
the roots of a polynomial.
Example 2 Consider

    Ĝ(s) = (as + 1)/(bs + 1)

with a, b > 0. Look at the Bode magnitude plot: For a ≥ b it is increasing (high-pass); else, it is decreasing (low-pass). Thus

    ‖Ĝ‖∞ = a/b if a ≥ b,  1 if a < b.
2.3 Input-Output Relationships
The question of interest in this section is: If we know how big the input is, how big is the output
going to be? Consider a linear system with input u, output y, and transfer function Ĝ, assumed
stable and strictly proper. The results are summarized in two tables below. Suppose that u is the
unit impulse, δ. Then the 2-norm of y equals the 2-norm of G, which by Parseval’s theorem equals
the 2-norm of Ĝ; this gives entry (1,1) in Table 2.1. The rest of the first column is for the ∞-norm
and pow, and the second column is for a sinusoidal input. The ∞ in the (1,2) entry is true as long as Ĝ(jω) ≠ 0.
              u(t) = δ(t)    u(t) = sin(ωt)
    ‖y‖2      ‖Ĝ‖2           ∞
    ‖y‖∞      ‖G‖∞           |Ĝ(jω)|
    pow(y)    0              (1/√2)|Ĝ(jω)|

Table 2.1: Output norms and pow for two inputs
Now suppose that u is not a fixed signal but that it can be any signal of 2-norm ≤ 1. It turns out that the least upper bound on the 2-norm of the output, that is,

    sup{‖y‖2 : ‖u‖2 ≤ 1},

which we can call the 2-norm/2-norm system gain, equals the ∞-norm of Ĝ; this provides entry (1,1) in Table 2.2. The other entries are the other system gains. The ∞ in the various entries is true as long as Ĝ ≢ 0, that is, as long as there is some ω for which Ĝ(jω) ≠ 0.
              ‖u‖2      ‖u‖∞      pow(u)
    ‖y‖2      ‖Ĝ‖∞      ∞         ∞
    ‖y‖∞      ‖Ĝ‖2      ‖G‖1      ∞
    pow(y)    0         ≤ ‖Ĝ‖∞    ‖Ĝ‖∞

Table 2.2: System Gains
A typical application of these tables is as follows. Suppose that our control analysis or design
problem involves, among other things, a requirement of disturbance attenuation: The controlled
system has a disturbance input, say u, whose effect on the plant output, say y, should be small. Let
G denote the impulse response from u to y. The controlled system will be required to be stable, so
the transfer function Ĝ will be stable. Typically, it will be strictly proper, too (or at least proper).
The tables tell us how much u affects y according to various measures. For example, if u is known
to be a sinusoid of fixed frequency (maybe u comes from a power source at 60 Hz), then the second
column of Table 2.1 gives the relative size of y according to the three measures. More commonly,
the disturbance signal will not be known a priori, so Table 2.2 will be more relevant.
Notice that the ∞-norm of the transfer function appears in several entries in the tables. This
norm is therefore an important measure for system performance.
Example A system with transfer function 1/(10s + 1) has a disturbance input d(t) known to have the energy bound ‖d‖2 ≤ 0.4. Suppose that we want to find the best estimate of the ∞-norm of the output y(t). Table 2.2 says that the 2-norm/∞-norm gain equals the 2-norm of the transfer function, which equals 1/√20. Thus

    ‖y‖∞ ≤ 0.4/√20.
The next two sections concern the proofs of the tables and are therefore optional.
2.4 Power Analysis (Optional)
For a power signal u define the autocorrelation function

    Ru(τ) := lim_{T→∞} (1/2T) ∫_{−T}^{T} u(t) u(t + τ) dt,

that is, Ru(τ) is the average value of the product u(t)u(t + τ). Observe that

    Ru(0) = pow(u)² ≥ 0.

We must restrict our definition of a power signal to those signals for which the above limit exists for all values of τ, not just τ = 0. For such signals we have the additional property that

    |Ru(τ)| ≤ Ru(0).
Proof The Cauchy-Schwarz inequality implies that

    ∫_{−T}^{T} u(t)v(t) dt ≤ ( ∫_{−T}^{T} u(t)² dt )^{1/2} ( ∫_{−T}^{T} v(t)² dt )^{1/2}.

Set v(t) = u(t + τ) and multiply by 1/(2T) to get

    (1/2T) ∫_{−T}^{T} u(t)u(t + τ) dt ≤ ( (1/2T) ∫_{−T}^{T} u(t)² dt )^{1/2} ( (1/2T) ∫_{−T}^{T} u(t + τ)² dt )^{1/2}.

Now let T → ∞ to get the desired result.

Let Su denote the Fourier transform of Ru. Thus

    Su(jω) = ∫_{−∞}^{∞} Ru(τ) e^{−jωτ} dτ,

    Ru(τ) = (1/2π) ∫_{−∞}^{∞} Su(jω) e^{jωτ} dω,

    pow(u)² = Ru(0) = (1/2π) ∫_{−∞}^{∞} Su(jω) dω.
From the last equation we interpret Su (jω)/2π as power density. The function Su is called the
power spectral density of the signal u.
Now consider two power signals, u and v. Their cross-correlation function is

    Ruv(τ) := lim_{T→∞} (1/2T) ∫_{−T}^{T} u(t) v(t + τ) dt
and Suv , the Fourier transform, is called their cross-power spectral density function.
We now derive some useful facts concerning a linear system with transfer function Ĝ, assumed
stable and proper, and its input u and output y.
1. Ruy = G ∗ Ru

Proof Since

    y(t) = ∫_{−∞}^{∞} G(α) u(t − α) dα    (2.1)

we have

    u(t)y(t + τ) = ∫_{−∞}^{∞} G(α) u(t) u(t + τ − α) dα.

Thus the average value of u(t)y(t + τ) equals

    ∫_{−∞}^{∞} G(α) Ru(τ − α) dα.
2. Ry = G ∗ Grev ∗ Ru, where Grev(t) := G(−t)

Proof Using (2.1) we get

    y(t)y(t + τ) = ∫_{−∞}^{∞} G(α) y(t) u(t + τ − α) dα,

so the average value of y(t)y(t + τ) equals

    ∫_{−∞}^{∞} G(α) Ryu(τ − α) dα

(i.e., Ry = G ∗ Ryu). Similarly, you can check that Ryu = Grev ∗ Ru.
3. Sy(jω) = |Ĝ(jω)|² Su(jω)

Proof From the previous fact we have

    Sy(jω) = Ĝ(jω) Ĝrev(jω) Su(jω),
so it remains to show that the Fourier transform of Grev equals the complex-conjugate of Ĝ(jω).
This is easy.
2.5 Proofs for Tables 2.1 and 2.2 (Optional)
Table 2.1
Entry (1,1) If u = δ, then y = G, so ‖y‖2 = ‖G‖2. But by Parseval's theorem, ‖G‖2 = ‖Ĝ‖2.

Entry (2,1) Again, since y = G.
Entry (3,1)

    pow(y)² = lim_{T→∞} (1/2T) ∫_0^T G(t)² dt
            ≤ lim_{T→∞} (1/2T) ∫_0^∞ G(t)² dt
            = lim_{T→∞} (1/2T) ‖G‖2²
            = 0
Entry (1,2) With the input u(t) = sin(ωt), the output is

    y(t) = |Ĝ(jω)| sin[ωt + arg Ĝ(jω)].    (2.2)

The 2-norm of this signal is infinite as long as Ĝ(jω) ≠ 0, that is, the system's transfer function does not have a zero at the frequency of excitation.
Entry (2,2) The amplitude of the sinusoid (2.2) equals |Ĝ(jω)|.
Entry (3,2) Let φ := arg Ĝ(jω). Then

    pow(y)² = lim_{T→∞} (1/2T) ∫_{−T}^{T} |Ĝ(jω)|² sin²(ωt + φ) dt
            = |Ĝ(jω)|² lim_{T→∞} (1/2T) ∫_{−T}^{T} sin²(ωt + φ) dt
            = |Ĝ(jω)|² lim_{T→∞} (1/2ωT) ∫_{−ωT+φ}^{ωT+φ} sin²(θ) dθ
            = |Ĝ(jω)|² (1/π) ∫_0^π sin²(θ) dθ
            = (1/2) |Ĝ(jω)|².
Table 2.2
Entry (1,1) First we see that ‖Ĝ‖∞ is an upper bound on the 2-norm/2-norm system gain:

    ‖y‖2² = ‖ŷ‖2²
          = (1/2π) ∫_{−∞}^{∞} |Ĝ(jω)|² |û(jω)|² dω
          ≤ ‖Ĝ‖∞² (1/2π) ∫_{−∞}^{∞} |û(jω)|² dω
          = ‖Ĝ‖∞² ‖û‖2²
          = ‖Ĝ‖∞² ‖u‖2².
To show that ‖Ĝ‖∞ is the least upper bound, first choose a frequency ωo where |Ĝ(jω)| is maximum, that is,

    |Ĝ(jωo)| = ‖Ĝ‖∞.

Now choose the input u so that

    |û(jω)| = c if |ω − ωo| < ε or |ω + ωo| < ε, and 0 otherwise,

where ε is a small positive number and c is chosen so that u has unit 2-norm (i.e., c = √(π/2ε)). Then

    ‖ŷ‖2² ≈ (1/2π)[ |Ĝ(−jωo)|² π + |Ĝ(jωo)|² π ] = |Ĝ(jωo)|² = ‖Ĝ‖∞².
Entry (2,1) This is an application of the Cauchy-Schwarz inequality:

    |y(t)| = | ∫_{−∞}^{∞} G(t − τ) u(τ) dτ |
           ≤ ( ∫_{−∞}^{∞} G(t − τ)² dτ )^{1/2} ( ∫_{−∞}^{∞} u(τ)² dτ )^{1/2}
           = ‖G‖2 ‖u‖2
           = ‖Ĝ‖2 ‖u‖2.

Hence

    ‖y‖∞ ≤ ‖Ĝ‖2 ‖u‖2.

To show that ‖Ĝ‖2 is the least upper bound, apply the input

    u(t) = G(−t)/‖G‖2.

Then ‖u‖2 = 1 and |y(0)| = ‖G‖2, so ‖y‖∞ ≥ ‖G‖2.
Entry (3,1) If ‖u‖2 ≤ 1, then the 2-norm of y is finite [as in entry (1,1)], so pow(y) = 0.

Entry (1,2) Apply a sinusoidal input of unit amplitude and frequency ω such that jω is not a zero of Ĝ. Then ‖u‖∞ = 1, but ‖y‖2 = ∞.
Entry (2,2) First, ‖G‖1 is an upper bound on the ∞-norm/∞-norm system gain:

    |y(t)| = | ∫_{−∞}^{∞} G(τ) u(t − τ) dτ |
           ≤ ∫_{−∞}^{∞} |G(τ) u(t − τ)| dτ
           ≤ ( ∫_{−∞}^{∞} |G(τ)| dτ ) ‖u‖∞
           = ‖G‖1 ‖u‖∞.
That ‖G‖1 is the least upper bound can be seen as follows. Fix t and set

    u(t − τ) := sgn(G(τ)),  ∀τ.

Then ‖u‖∞ = 1 and

    y(t) = ∫_{−∞}^{∞} G(τ) u(t − τ) dτ = ∫_{−∞}^{∞} |G(τ)| dτ = ‖G‖1.

So ‖y‖∞ ≥ ‖G‖1.
Entry (3,2) If u is a power signal and ‖u‖∞ ≤ 1, then pow(u) ≤ 1, so

    sup{pow(y) : ‖u‖∞ ≤ 1} ≤ sup{pow(y) : pow(u) ≤ 1}.

We will see in entry (3,3) that the latter supremum equals ‖Ĝ‖∞.
Entry (1,3) If u is a power signal, then from the preceding section,

    Sy(jω) = |Ĝ(jω)|² Su(jω),

so

    pow(y)² = (1/2π) ∫_{−∞}^{∞} |Ĝ(jω)|² Su(jω) dω.    (2.3)

Unless |Ĝ(jω)|² Su(jω) equals zero for all ω, pow(y) is positive, in which case the 2-norm of y is infinite.
Entry (2,3) This case is not so important, so a complete proof is omitted. The main idea is this: If pow(u) ≤ 1, then pow(y) is finite but ‖y‖∞ is not necessarily (see u8 in Exercise 2). So for a proof of this entry, one should construct an input with pow(u) ≤ 1, but such that ‖y‖∞ = ∞.
Entry (3,3) From (2.3) we get immediately that

    pow(y) ≤ ‖Ĝ‖∞ pow(u).

To achieve equality, suppose that

    |Ĝ(jωo)| = ‖Ĝ‖∞

and let the input be

    u(t) = √2 sin(ωo t).

Then Ru(τ) = cos(ωo τ), so

    pow(u) = Ru(0) = 1.

Also,

    Su(jω) = π[δ(ω − ωo) + δ(ω + ωo)],

so from (2.3)

    pow(y)² = (1/2)|Ĝ(jωo)|² + (1/2)|Ĝ(−jωo)|² = |Ĝ(jωo)|² = ‖Ĝ‖∞².
2.6 Computing by State-Space Methods (Optional)
This book is on classical control, which is set in the frequency domain. Current widespread practice,
however, is to do computations using state-space methods. The purpose of this optional section is
to illustrate how this is done for the problem of computing the 2-norm and ∞-norm of a transfer
function. The derivation of the procedures is brief.
Consider a state-space model of the form
ẋ(t) = Ax(t) + Bu(t),
y(t) = Cx(t).
Here u(t) is the input signal and y(t) the output signal, both scalar-valued. In contrast, x(t) is a
vector-valued function with, say, n components. The dot in ẋ means take the derivative of each
component. Then A, B, C are real matrices of sizes n × n, n × 1, and 1 × n, respectively.
The equations are assumed to hold for t ≥ 0. Take Laplace transforms with zero initial conditions
on x:
sx̂(s) = Ax̂(s) + B û(s),
ŷ(s) = C x̂(s).
Now eliminate x̂(s) to get

    ŷ(s) = C(sI − A)⁻¹B û(s).

We conclude that the transfer function from û to ŷ is

    Ĝ(s) = C(sI − A)⁻¹B.
This transfer function is strictly proper. [Try an example: start with some A, B, C with n = 2,
and compute Ĝ(s).]
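Taking up that suggestion, here is one such example worked with SciPy's ss2tf (the particular A, B, C are illustrative choices, not from the text):

```python
import numpy as np
from scipy.signal import ss2tf

# A second-order example: x1' = x2, x2' = -2 x1 - 3 x2 + u, y = x1.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)
# G(s) = 1/(s^2 + 3s + 2): strictly proper, poles at the eigenvalues -1, -2.
assert np.allclose(den, [1.0, 3.0, 2.0])
assert np.allclose(sorted(np.linalg.eigvals(A).real), [-2.0, -1.0])
```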
Going the other way, from a strictly proper transfer function to a state-space model, is more
profound, but it is true that for every strictly proper transfer function Ĝ(s) there exist (A, B, C)
such that
    Ĝ(s) = C(sI − A)⁻¹B.
From the representation

    Ĝ(s) = C adj(sI − A) B / det(sI − A)
it should be clear that the poles of Ĝ(s) are included in the eigenvalues of A. We say that A is
stable if all its eigenvalues lie in Re s < 0, in which case Ĝ is a stable transfer function.
Now start with the representation

    Ĝ(s) = C(sI − A)⁻¹B

with A stable. We want to compute ‖Ĝ‖2 and ‖Ĝ‖∞ from the data (A, B, C).
The 2-Norm
Define the matrix exponential

    e^{tA} := I + tA + (t²/2!)A² + · · ·

just as if A were a scalar (convergence can be proved). Let a prime denote transpose and define the matrix

    L := ∫_0^∞ e^{tA} BB′ e^{tA′} dt

(the integral converges because A is stable). Then L satisfies the equation

    AL + LA′ + BB′ = 0.
Proof Integrate both sides of the equation

    (d/dt) e^{tA} BB′ e^{tA′} = A e^{tA} BB′ e^{tA′} + e^{tA} BB′ e^{tA′} A′

from 0 to ∞, noting that exp(tA) converges to 0 because A is stable, to get

    −BB′ = AL + LA′.
In terms of L a simple formula for the 2-norm of Ĝ is

    ‖Ĝ‖2 = (CLC′)^{1/2}.

Proof The impulse response function is

    G(t) = C e^{tA} B,  t > 0.
Calling on Parseval we get

    ‖Ĝ‖2² = ‖G‖2²
          = ∫_0^∞ C e^{tA} BB′ e^{tA′} C′ dt
          = C ( ∫_0^∞ e^{tA} BB′ e^{tA′} dt ) C′
          = CLC′.
So a procedure to compute the 2-norm is as follows:

Step 1 Solve the equation AL + LA′ + BB′ = 0 for the matrix L.

Step 2 ‖Ĝ‖2 = (CLC′)^{1/2}.
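The two steps map directly onto standard routines; a sketch using SciPy's continuous Lyapunov solver, checked against Example 1 (the state-space data used here for 1/(τs + 1) are A = −1/τ, B = 1/τ, C = 1):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm(A, B, C):
    """2-norm of G(s) = C (sI - A)^{-1} B via A L + L A' + B B' = 0."""
    L = solve_continuous_lyapunov(A, -B @ B.T)        # Step 1
    return float(np.sqrt((C @ L @ C.T).item()))       # Step 2: (C L C')^{1/2}

tau = 0.5
A = np.array([[-1.0 / tau]])
B = np.array([[1.0 / tau]])
C = np.array([[1.0]])
# Example 1: ||1/(tau*s + 1)||_2 = 1/sqrt(2*tau).
assert abs(h2_norm(A, B, C) - 1 / np.sqrt(2 * tau)) < 1e-9
```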
The ∞-Norm
Computing the ∞-norm is harder; we shall have to be content with a search procedure. Define the 2n × 2n matrix

    H := [ A     BB′ ]
         [ −C′C  −A′ ].
Theorem 1 ‖Ĝ‖∞ < 1 iff H has no eigenvalues on the imaginary axis.
Proof The proof of this theorem is a bit involved, so only sufficiency is considered, and it is only
sketched.
It is not too hard to derive that

    1/[1 − Ĝ(−s)Ĝ(s)] = 1 + [0  B′] (sI − H)⁻¹ [B; 0].

Thus the poles of [1 − Ĝ(−s)Ĝ(s)]⁻¹ are contained in the eigenvalues of H.

Assume that H has no eigenvalues on the imaginary axis. Then [1 − Ĝ(−s)Ĝ(s)]⁻¹ has no poles there, so 1 − Ĝ(−s)Ĝ(s) has no zeros there, that is,

    |Ĝ(jω)| ≠ 1,  ∀ω.

Since Ĝ is strictly proper, this implies that

    |Ĝ(jω)| < 1,  ∀ω

(i.e., ‖Ĝ‖∞ < 1).
The theorem suggests this way to compute an ∞-norm: Select a positive number γ; test if ‖Ĝ‖∞ < γ (i.e., if ‖γ⁻¹Ĝ‖∞ < 1) by calculating the eigenvalues of the appropriate matrix; increase or decrease γ accordingly; repeat. A bisection search is quite efficient: Get upper and lower bounds for ‖Ĝ‖∞; try γ midway between these bounds; continue.
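A bare-bones version of this bisection (Python with NumPy; scaling B by 1/γ implements the test on γ⁻¹Ĝ, and the imaginary-axis tolerance 1e-9 is an arbitrary numerical choice):

```python
import numpy as np

def hinf_bisect(A, B, C, tol=1e-6):
    """Bisection for ||G||_inf using the Hamiltonian test of Theorem 1."""
    def below(gamma):
        # True iff ||G||_inf < gamma: H for G/gamma has no
        # eigenvalues on the imaginary axis.
        Bg = B / gamma
        H = np.block([[A, Bg @ Bg.T], [-C.T @ C, -A.T]])
        return not np.any(np.abs(np.linalg.eigvals(H).real) < 1e-9)

    lo, hi = 0.0, 1.0
    while not below(hi):              # grow hi until it is an upper bound
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if below(mid) else (mid, hi)
    return 0.5 * (lo + hi)

# G(s) = 1/(s + 1): the Bode magnitude peaks at w = 0 with value 1.
A = np.array([[-1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
assert abs(hinf_bisect(A, B, C) - 1.0) < 1e-3
```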
Exercises
1. Suppose that u(t) is a continuous signal whose derivative u̇(t) is continuous too. Which of the
following qualifies as a norm for u?
    sup_t |u̇(t)|

    |u(0)| + sup_t |u̇(t)|

    max{ sup_t |u(t)|, sup_t |u̇(t)| }

    sup_t |u(t)| + sup_t |u̇(t)|
2. Consider the Venn diagram in Figure 2.1. Show that the functions u1 to u9 , defined below,
are located in the diagram as shown in Figure 2.2. All the functions are zero for t < 0.
Figure 2.2: Figure for Exercise 2.
    u1(t) = 1/√t if t ≤ 1, and 0 if t > 1

    u2(t) = 1/t^{1/4} if t ≤ 1, and 0 if t > 1
u3 (t) = 1
u4 (t) = 1/(1 + t)
u5 (t) = u2 (t) + u4 (t)
u6 (t) = 0
u7 (t) = u2 (t) + 1
For u8, set

    vk(t) = k if k < t < k + k⁻³, and 0 otherwise
and then

    u8(t) = Σ_{k=1}^{∞} vk(t).
Finally, let u9 equal 1 in the intervals

    [2^{2k}, 2^{2k+1}],  k = 0, 1, 2, . . .

and zero elsewhere.
3. Suppose that Ĝ(s) is a real-rational, stable transfer function with Ĝ⁻¹ stable, too (i.e., neither poles nor zeros in Re s ≥ 0). True or false: The Bode phase plot, ∠Ĝ(jω) versus ω, can be uniquely constructed from the Bode magnitude plot, |Ĝ(jω)| versus ω. (Answer: false!)
4. Recall that the transfer function for a pure time delay of τ time units is

    D̂(s) := e^{−sτ}.

Say that a norm ‖·‖ on transfer functions is time-delay invariant if for every transfer function Ĝ (such that ‖Ĝ‖ < ∞) and every τ > 0,

    ‖D̂Ĝ‖ = ‖Ĝ‖.

Is the 2-norm or ∞-norm time-delay invariant?
5. Compute the 1-norm of the impulse response corresponding to the transfer function

    1/(τs + 1),  τ > 0.
6. For Ĝ stable and strictly proper, show that ‖G‖1 < ∞ and find an inequality relating ‖Ĝ‖∞ and ‖G‖1.
7. This concerns entry (2,2) in Table 2.2. The given entry assumes that Ĝ is stable and strictly proper. When Ĝ is stable but only proper, it can be expressed as

    Ĝ(s) = c + Ĝ1(s)

with c constant and Ĝ1 stable and strictly proper. Show that the correct (2,2) entry is |c| + ‖G1‖1.
8. Show that entries (2,2) and (3,2) in Table 2.1 and entries (1,1), (3,2), and (3,3) in Table 2.2
hold when Ĝ is stable and proper (instead of strictly proper).
9. Let Ĝ(s) be a strictly proper stable transfer function and G(t) its inverse Laplace transform. Let u(t) be a signal of finite 1-norm. True or false:

    ‖G ∗ u‖1 ≤ ‖G‖1 ‖u‖1 ?
10. Consider a system with transfer function

    ωn² / (s² + 2ζωn s + ωn²),  ζ, ωn > 0,

and input

    u(t) = sin 0.1t,  −∞ < t < ∞.

Compute pow of the output.
11. Consider a system with transfer function

    (s + 2)/(4s + 1)

and input u and output y. Compute

    sup_{‖u‖∞ = 1} ‖y‖∞

and find an input achieving this supremum.
12. For a linear system with input u(t) and output y(t), prove that

    sup_{‖u‖ ≤ 1} ‖y‖ = sup_{‖u‖ = 1} ‖y‖,

where the norm is, say, the 2-norm.
13. Show that the 2-norm for transfer functions is not submultiplicative.
14. Write a MATLAB program to compute the ∞-norm of a transfer function using the grid method. Test your program on the function

    1/(s² + 10⁻⁶ s + 1)

and compare your answer to the exact solution computed by hand using the derivative method.
Notes and References
The material in this chapter belongs to the field of mathematics called functional analysis. Tools
from functional analysis were introduced into the subject of feedback control around 1960 by G.
Zames and I. Sandberg. Some references are Desoer and Vidyasagar (1975), Holtzman (1970), Mees
(1981), and Willems (1971). The state-space procedure for the ∞-norm is from Boyd et al. (1989).
Chapter 3
Basic Concepts
This chapter and the next are the most fundamental. We concentrate on the single-loop feedback
system. Stability of this system is defined and characterized. Then the system is analyzed for
its ability to track certain signals (i.e., steps and ramps) asymptotically as time increases. Finally, tracking is addressed as a performance specification. Uncertainty is postponed until the next
chapter.
Now a word about notation. In the preceding chapter we used signals in the time and frequency
domains; the notation was u(t) for a function of time and û(s) for its Laplace transform. When the
context is solely the frequency domain, it is convenient to drop the hat and write u(s); similarly for
an impulse response G(t) and the corresponding transfer function Ĝ(s).
3.1 Basic Feedback Loop
The most elementary feedback control system has three components: a plant (the object to be
controlled, no matter what it is, is always called the plant), a sensor to measure the output of the
plant, and a controller to generate the plant’s input. Usually, actuators are lumped in with the
plant. We begin with the block diagram in Figure 3.1. Notice that each of the three components
Figure 3.1: Elementary control system.
has two inputs, one internal to the system and one coming from outside, and one output. These
signals have the following interpretations:
r — reference or command input
v — sensor output
u — actuating signal, plant input
d — external disturbance
y — plant output and measured signal
n — sensor noise
The three signals coming from outside—r, d, and n—are called exogenous inputs.
In what follows we shall consider a variety of performance objectives, but they can be summarized by saying that y should approximate some prespecified function of r, and that it should do so in the presence of the disturbance d and the sensor noise n, and despite uncertainty in the plant. We may also want to
limit the size of u. Frequently, it makes more sense to describe the performance objective in terms
of the measurement v rather than y, since often the only knowledge of y is obtained from v.
The analysis to follow is done in the frequency domain. To simplify notation, hats are omitted
from Laplace transforms.
Each of the three components in Figure 3.1 is assumed to be linear, so its output is a linear
function of its input, in this case a two-dimensional vector. For example, the plant equation has
the form

    y = P [d  u]ᵀ.

Partitioning the 1 × 2 transfer matrix P as

    P = [P1  P2],

we get

    y = P1 d + P2 u.
We shall take an even more specialized viewpoint and suppose that the outputs of the three
components are linear functions of the sums (or difference) of their inputs; that is, the plant, sensor,
and controller equations are taken to be of the form
y = P (d + u),
v = F (y + n),
u = C(r − v).
The minus sign in the last equation is a matter of tradition. The block diagram for these equations
is in Figure 3.2. Our convention is that plus signs at summing junctions are omitted.
This section ends with the notion of well-posedness. This means that in Figure 3.2 all closed-loop transfer functions exist, that is, all transfer functions from the three exogenous inputs to all
internal signals, namely, u, y, v, and the outputs of the summing junctions. Label the outputs of
the summing junctions as in Figure 3.3. For well-posedness it suffices to look at the nine transfer
functions from r, d, n to x1 , x2 , x3 . (The other transfer functions are obtainable from these.) Write
the equations at the summing junctions:
x1 = r − F x3 ,
x2 = d + Cx1 ,
x3 = n + P x2 .
Figure 3.2: Basic feedback loop. [Block diagram: r enters a summing junction with −v; C produces u; d adds at the plant input; P produces y; n adds to y; F produces v, which is fed back.]
Figure 3.3: Basic feedback loop with summing-junction outputs labeled. [The same diagram as Figure 3.2, with x1 the output of the r-summing junction, x2 the output of the d-summing junction, and x3 the output of the n-summing junction.]
In matrix form these are

    [  1    0    F ] [ x1 ]   [ r ]
    [ −C    1    0 ] [ x2 ] = [ d ] .
    [  0   −P    1 ] [ x3 ]   [ n ]
Thus, the system is well-posed iff the above 3 × 3 matrix is nonsingular, that is, the determinant
1 + P CF is not identically equal to zero. [For instance, the system with P (s) = 1, C(s) = 1,
F (s) = −1 is not well-posed.] Then the nine transfer functions are obtained from the equation

    [ x1 ]   [  1    0    F ]⁻¹ [ r ]
    [ x2 ] = [ −C    1    0 ]   [ d ] ,
    [ x3 ]   [  0   −P    1 ]   [ n ]

that is,

    [ x1 ]                 [  1   −P F   −F  ] [ r ]
    [ x2 ] = (1 + P CF )⁻¹ [  C     1   −CF ] [ d ] .        (3.1)
    [ x3 ]                 [ P C    P     1  ] [ n ]
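As a numerical sanity check of (3.1), one can evaluate the formula at a single point s = jω and confirm that the resulting x1, x2, x3 satisfy the three summing-junction equations. The sketch below uses arbitrarily chosen illustrative P, C, F, not transfer functions from the text.

```python
# Check formula (3.1) at one frequency. P, C, F below are arbitrary
# illustrative transfer functions, not taken from the text.

def P(s): return 1.0 / (s + 2.0)
def C(s): return 10.0 * (s + 1.0) / s
def F(s): return 1.0 / (0.01 * s + 1.0)

s = 3.0j                       # evaluate at omega = 3 rad/s
r, d, n = 1.0, 0.5, -0.2       # arbitrary exogenous inputs (phasors)

p, c, f = P(s), C(s), F(s)
den = 1.0 + p * c * f          # nonzero here: the system is well-posed

# The nine transfer functions of (3.1), applied to (r, d, n):
x1 = (r - p * f * d - f * n) / den
x2 = (c * r + d - c * f * n) / den
x3 = (p * c * r + p * d + n) / den

# They must satisfy the summing-junction equations:
assert abs(x1 - (r - f * x3)) < 1e-9
assert abs(x2 - (d + c * x1)) < 1e-9
assert abs(x3 - (n + p * x2)) < 1e-9
print("(3.1) is consistent at s =", s)
```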
A stronger notion of well-posedness that makes sense when P , C, and F are proper is that the nine transfer functions above are proper. A necessary and sufficient condition for this is that 1 + P CF not be strictly proper [i.e., (P CF )(∞) ≠ −1].
One might argue that the transfer functions of all physical systems are strictly proper: If a
sinusoid of ever-increasing frequency is applied to a (linear, time-invariant) system, the amplitude
of the output will go to zero. This is somewhat misleading because a real system will cease to
behave linearly as the frequency of the input increases. Furthermore, our transfer functions will be
used to parametrize an uncertainty set, and as we shall see, it may be convenient to allow some of
them to be only proper. A proportional-integral-derivative controller is very common in practice,
especially in chemical engineering. It has the form

    k1 + k2/s + k3 s.

This is not proper, but it can be approximated over any desired frequency range by a proper one, for example,

    k1 + k2/s + k3 s/(τ s + 1).
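A quick numerical sketch (the gains k1, k2, k3 and the time constant τ are arbitrary illustrative values) confirms that the proper controller matches the ideal PID at frequencies well below 1/τ while staying bounded at high frequency:

```python
# Ideal PID versus its proper approximation; gains and tau are
# arbitrary illustrative values.
k1, k2, k3, tau = 2.0, 1.0, 0.5, 0.01

def pid_ideal(s):  return k1 + k2 / s + k3 * s
def pid_proper(s): return k1 + k2 / s + k3 * s / (tau * s + 1.0)

# Close at frequencies well below 1/tau = 100 rad/s:
for w in (0.1, 1.0, 10.0):
    a, b = pid_ideal(1j * w), pid_proper(1j * w)
    assert abs(a - b) / abs(a) < 0.11

# At high frequency the ideal PID is unbounded, the proper one is not:
assert abs(pid_ideal(1e6j)) > 1e5
assert abs(pid_proper(1e6j)) < k1 + k3 / tau + 1.0
print("proper approximation matches the PID below 1/tau")
```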
Notice that the feedback system is automatically well-posed, in the stronger sense, if P , C, and
F are proper and one is strictly proper. For most of the book, we shall make the following standing
assumption, under which the nine transfer functions in (3.1) are proper:
P is strictly proper, C and F are proper.
However, at times it will be convenient to require only that P be proper. In this case we shall always
assume that |P CF | < 1 at ω = ∞, which ensures that 1 + P CF is not strictly proper. Given that
no model, no matter how complex, can approximate a real system at sufficiently high frequencies,
we should be very uncomfortable if |P CF | > 1 at ω = ∞, because such a controller would almost
surely be unstable if implemented on a real system.
3.2 Internal Stability
Consider a system with input u, output y, and transfer function Ĝ, assumed stable and proper. We
can write
Ĝ = G0 + Ĝ1 ,
where G0 is a constant and Ĝ1 is strictly proper.
Example:

    s/(s + 1) = 1 − 1/(s + 1).
In the time domain the equation is

    y(t) = G0 u(t) + ∫_{−∞}^{∞} G1(t − τ) u(τ) dτ.
If |u(t)| ≤ c for all t, then

    |y(t)| ≤ |G0| c + (∫_{−∞}^{∞} |G1(τ)| dτ) c.
The right-hand side is finite. Thus the output is bounded whenever the input is bounded. [This
argument is the basis for entry (2,2) in Table 2.2.]
If the nine transfer functions in (3.1) are stable, then the feedback system is said to be internally
stable. As a consequence, if the exogenous inputs are bounded in magnitude, so too are x1 , x2 , and
x3 , and hence u, y, and v. So internal stability guarantees bounded internal signals for all bounded
exogenous signals.
The idea behind this definition of internal stability is that it is not enough to look only at
input-output transfer functions, such as from r to y, for example. This transfer function could be
stable, so that y is bounded when r is, and yet an internal signal could be unbounded, probably
causing internal damage to the physical system.
For the remainder of this section hats are dropped.
Example In Figure 3.3 take

    C(s) = (s − 1)/(s + 1),    P (s) = 1/(s² − 1),    F (s) = 1.
Check that the transfer function from r to y is stable, but that from d to y is not. The feedback
system is therefore not internally stable. As we will see later, this offense is caused by the cancellation
of the controller zero and the plant pole at the point s = 1.
We shall develop a test for internal stability which is easier than examining nine transfer functions. Write P , C, and F as ratios of coprime polynomials (i.e., polynomials with no common
factors):

    P = NP /MP ,    C = NC /MC ,    F = NF /MF .
The characteristic polynomial of the feedback system is the one formed by taking the product of
the three numerators plus the product of the three denominators:
NP NC NF + MP MC MF .
The closed-loop poles are the zeros of the characteristic polynomial.
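For the example above, with C = (s − 1)/(s + 1), P = 1/(s² − 1), and F = 1, the characteristic polynomial can be formed directly. A minimal Python sketch (polynomials as coefficient lists, highest degree first; convolution multiplies polynomials) exhibits a closed-loop pole at s = 1, in the right half-plane:

```python
# Characteristic polynomial for C = (s-1)/(s+1), P = 1/(s^2-1), F = 1.
# Polynomials are coefficient lists, highest degree first.

def polymul(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def polyadd(a, b):
    m = max(len(a), len(b))
    a = [0.0] * (m - len(a)) + list(a)
    b = [0.0] * (m - len(b)) + list(b)
    return [x + y for x, y in zip(a, b)]

def polyval(p, s):
    v = 0.0
    for coef in p:
        v = v * s + coef
    return v

NP, MP = [1.0], [1.0, 0.0, -1.0]   # P = 1/(s^2 - 1)
NC, MC = [1.0, -1.0], [1.0, 1.0]   # C = (s - 1)/(s + 1)
NF, MF = [1.0], [1.0]              # F = 1

char = polyadd(polymul(polymul(NP, NC), NF),
               polymul(polymul(MP, MC), MF))

# char = s^3 + s^2 - 2 = (s - 1)(s^2 + 2s + 2): a closed-loop pole
# at s = 1, in the right half-plane.
assert char == [1.0, 1.0, 0.0, -2.0]
assert abs(polyval(char, 1.0)) < 1e-12
print("characteristic polynomial:", char)
```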
Theorem 1 The feedback system is internally stable iff there are no closed-loop poles in Re s ≥ 0.
Proof For simplicity assume that F = 1; the proof in the general case is similar, but a bit messier.
From (3.1) we have

    [ x1 ]                [  1   −P   −1 ] [ r ]
    [ x2 ] = (1 + P C)⁻¹ [  C    1   −C ] [ d ] .
    [ x3 ]                [ P C   P    1 ] [ n ]
Substitute in the ratios and clear fractions to get

    [ x1 ]                        [ MP MC   −NP MC   −MP MC ] [ r ]
    [ x2 ] = (NP NC + MP MC )⁻¹ [ MP NC    MP MC   −MP NC ] [ d ] .        (3.2)
    [ x3 ]                        [ NP NC    NP MC    MP MC ] [ n ]
Note that the characteristic polynomial equals NP NC + MP MC . Sufficiency is now evident; the feedback system is internally stable if the characteristic polynomial has no zeros in Re s ≥ 0.
Necessity involves a subtle point. Suppose that the feedback system is internally stable. Then
all nine transfer functions in (3.2) are stable, that is, they have no poles in Re s ≥ 0. But we cannot
immediately conclude that the polynomial NP NC + MP MC has no zeros in Re s ≥ 0 because this
polynomial may conceivably have a right half-plane zero which is also a zero of all nine numerators
in (3.2), and hence is canceled to form nine stable transfer functions. However, the characteristic
polynomial has no zero which is also a zero of all nine numerators, MP MC , NP MC , and so on.
Proof of this statement is left as an exercise. (It follows from the fact that we took coprime factors
to start with, that is, NP and MP are coprime, as are the other numerator-denominator pairs.)
By Theorem 1 internal stability can be determined simply by checking the zeros of a polynomial.
There is another test that provides additional insight.
Theorem 2 The feedback system is internally stable iff the following two conditions hold:

(a) The transfer function 1 + P CF has no zeros in Re s ≥ 0.

(b) There is no pole-zero cancellation in Re s ≥ 0 when the product P CF is formed.
Proof Recall that the feedback system is internally stable iff all nine transfer functions

                  [  1   −P F   −F  ]
    (1 + P CF )⁻¹ [  C     1   −CF ]
                  [ P C    P     1  ]
are stable.
(⇒) Assume that the feedback system is internally stable. Then in particular (1 + P CF )⁻¹ is stable (i.e., it has no poles in Re s ≥ 0). Hence 1 + P CF has no zeros there. This proves (a).
To prove (b), write P, C, F as ratios of coprime polynomials:

    P = NP /MP ,    C = NC /MC ,    F = NF /MF .
By Theorem 1 the characteristic polynomial
NP NC NF + MP MC MF
has no zeros in Re s ≥ 0. Thus the pair (NP , MC ) has no common zero in Re s ≥ 0, and similarly
for the other numerator-denominator pairs.
(⇐) Assume (a) and (b). Factor P, C, F as above, and let s0 be a zero of the characteristic
polynomial, that is,
(NP NC NF + MP MC MF )(s0 ) = 0.
We must show that Re s0 < 0; this will prove internal stability by Theorem 1. Suppose to the contrary that Re s0 ≥ 0. If
(MP MC MF )(s0 ) = 0,
then
(NP NC NF )(s0 ) = 0.
But this violates (b). Thus
(MP MC MF )(s0 ) ≠ 0,
so we can divide by it above to get

    1 + (NP NC NF /MP MC MF )(s0 ) = 0,
that is,
1 + (P CF )(s0 ) = 0,
which violates (a).
Finally, let us recall for later use the Nyquist stability criterion. It can be derived from Theorem 2
and the principle of the argument. Begin with the curve D in the complex plane: It starts at the
origin, goes up the imaginary axis, turns into the right half-plane following a semicircle of infinite
radius, and comes back up the negative imaginary axis to the origin. [Figure: the Nyquist contour D, enclosing the right half-plane.]
As a point s makes one circuit around this curve, the point P (s)C(s)F (s) traces out a curve called
the Nyquist plot of P CF . If P CF has a pole on the imaginary axis, then D must have a small
indentation to avoid it.
Nyquist Criterion Construct the Nyquist plot of P CF , indenting to the left around poles on the imaginary axis. Let n denote the total number of poles of P , C, and F in Re s ≥ 0. Then the feedback system is internally stable iff the Nyquist plot does not pass through the point −1 and encircles it exactly n times counterclockwise.
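The encirclement count can be computed numerically. The sketch below (with two illustrative loop transfer functions, not examples from the text) sweeps ω, tracks the angle of L(jω) + 1, and accumulates the winding around −1. For strictly proper L the infinite-radius semicircle maps to the single point 0 and contributes nothing.

```python
import math

def windings_about_minus_one(L, npts=200001):
    # Winding number of L(jw) around -1 as w sweeps from -inf to +inf,
    # parametrized by w = tan(theta) so the grid covers the whole axis.
    total, prev = 0.0, None
    for k in range(npts):
        theta = -math.pi / 2 + math.pi * (k + 0.5) / npts
        z = L(1j * math.tan(theta)) + 1.0   # vector from -1 to L(jw)
        ang = math.atan2(z.imag, z.real)
        if prev is not None:
            d = ang - prev
            if d > math.pi:  d -= 2 * math.pi   # unwrap the angle
            if d < -math.pi: d += 2 * math.pi
            total += d
        prev = ang
    return round(total / (2 * math.pi))

# L = 1/(s+1): no poles in Re s >= 0 (n = 0), so stability requires
# zero encirclements of -1.
assert windings_about_minus_one(lambda s: 1.0 / (s + 1.0)) == 0

# L = 2/(s-1): one pole in Re s >= 0 (n = 1); the closed loop is stable
# and the plot encircles -1 exactly once counterclockwise.
assert windings_about_minus_one(lambda s: 2.0 / (s - 1.0)) == 1
print("encirclement counts match the Nyquist criterion")
```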
Figure 3.4: Unity-feedback loop. [Block diagram: r enters a summing junction with minus the fed-back signal, producing the tracking error e; Ĉ produces u; d adds at the plant input; P̂ produces y; n adds to y before it is fed back.]
3.3 Asymptotic Tracking
In this section we look at a typical performance specification, perfect asymptotic tracking of a reference signal. Both the time domain and the frequency domain occur, so the hat notation is needed to distinguish them.
For the remainder of this chapter we specialize to the unity-feedback case, F̂ = 1, so the block
diagram is as in Figure 3.4. Here e is the tracking error; with n = d = 0, e equals the reference
input (ideal response), r, minus the plant output (actual response), y.
We wish to study this system’s capability of tracking certain test inputs asymptotically as time
tends to infinity. The two test inputs are the step

    r(t) = c for t ≥ 0,    r(t) = 0 for t < 0,

and the ramp

    r(t) = ct for t ≥ 0,    r(t) = 0 for t < 0
(c is a nonzero real number). As an application of the former think of the temperature-control
thermostat in a room; when you change the setting on the thermostat (step input), you would like
the room temperature eventually to change to the new setting (of course, you would like the change
to occur within a reasonable time). A situation with a ramp input is a radar dish designed to track
orbiting satellites. A satellite moving in a circular orbit at constant angular velocity sweeps out an
angle that is approximately a linear function of time (i.e., a ramp).
Define the loop transfer function L̂ := P̂ Ĉ. The transfer function from reference input r to
tracking error e is

    Ŝ := 1/(1 + L̂),
called the sensitivity function—more on this in the next section. The ability of the system to track
steps and ramps asymptotically depends on the number of zeros of Ŝ at s = 0.
Theorem 3 Assume that the feedback system is internally stable and n = d = 0.
(a) If r is a step, then e(t) −→ 0 as t −→ ∞ iff Ŝ has at least one zero at the origin.
(b) If r is a ramp, then e(t) −→ 0 as t −→ ∞ iff Ŝ has at least two zeros at the origin.
The proof is an application of the final-value theorem: If ŷ(s) is a rational Laplace transform that has no poles in Re s ≥ 0 except possibly a simple pole at s = 0, then lim_{t→∞} y(t) exists and it equals lim_{s→0} s ŷ(s).
Proof (a) The Laplace transform of the foregoing step is r̂(s) = c/s. The transfer function from r
to e equals Ŝ, so

    ê(s) = Ŝ(s) c/s.
Since the feedback system is internally stable, Ŝ is a stable transfer function. It follows from the
final-value theorem that e(t) does indeed converge as t −→ ∞, and its limit is the residue of the
function ê(s) at the pole s = 0:
e(∞) = Ŝ(0)c.
The right-hand side equals zero iff Ŝ(0) = 0.
(b) Similarly with r̂(s) = c/s².
Note that Ŝ has a zero at s = 0 iff L̂ has a pole there. Thus, from the theorem we see that if
the feedback system is internally stable and either P̂ or Ĉ has a pole at the origin (i.e., an inherent
integrator), then the output y(t) will asymptotically track any step input r.
Example To see how this works, take the simplest possible example,

    P̂ (s) = 1/s,    Ĉ(s) = 1.
Then the transfer function from r to e equals

    1/(1 + s⁻¹) = s/(s + 1).
So the open-loop pole at s = 0 becomes a closed-loop zero of the error transfer function; then this
zero cancels the pole of r̂(s), resulting in no unstable poles in ê(s). Similar remarks apply for a
ramp input.
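A simulation sketch of this example (forward-Euler integration with an arbitrary step size; r is the unit step) confirms that the integrator in the loop drives the tracking error to zero:

```python
# Unity feedback with P = 1/s, C = 1, unit-step r; forward Euler.
# Step size and horizon are arbitrary sketch choices.
dt, T = 1e-3, 10.0
y, t = 0.0, 0.0
while t < T:
    e = 1.0 - y        # tracking error (r = 1, n = d = 0)
    u = e              # controller C = 1
    y += dt * u        # integrator plant P = 1/s
    t += dt

# Exact error is e(t) = exp(-t), so e(10) is about 4.5e-5.
assert abs(1.0 - y) < 1e-3
print("tracking error at t = 10:", 1.0 - y)
```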
Theorem 3 is a special case of an elementary principle: For perfect asymptotic tracking, the
loop transfer function L̂ must contain an internal model of the unstable poles of r̂.
A similar analysis can be done for the situation where r = n = 0 and d is a sinusoid, say
d(t) = sin(ωt)1(t) (1 is the unit step). You can show this: If the feedback system is internally
stable, then y(t) −→ 0 as t −→ ∞ iff either P̂ has a zero at s = jω or Ĉ has a pole at s = jω
(Exercise 3).
3.4 Performance
In this section we again look at tracking a reference signal, but whereas in the preceding section
we considered perfect asymptotic tracking of a single signal, we will now consider a set of reference
signals and a bound on the steady-state error. This performance objective will be quantified in
terms of a weighted norm bound.
As before, let L denote the loop transfer function, L := P C. The transfer function from reference
input r to tracking error e is

    S := 1/(1 + L),
called the sensitivity function. In the analysis to follow, it will always be assumed that the feedback
system is internally stable, so S is a stable, proper transfer function. Observe that since L is strictly
proper (since P is), S(j∞) = 1.
The name sensitivity function comes from the following idea. Let T denote the transfer function
from r to y:

    T = P C/(1 + P C).
One way to quantify how sensitive T is to variations in P is to take the limiting ratio of a relative
perturbation in T (i.e., ∆T /T ) to a relative perturbation in P (i.e., ∆P/P ). Thinking of P as a
variable and T as a function of it, we get

    lim_{∆P→0} (∆T /T )/(∆P /P ) = (dT /dP )(P /T ).
The right-hand side is easily evaluated to be S. In this way, S is the sensitivity of the closed-loop
transfer function T to an infinitesimal perturbation in P .
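The identity (dT/dP)(P/T) = S can be checked numerically with a finite difference (a sketch; the controller, plant, and evaluation point are arbitrary illustrative values):

```python
# Finite-difference check that (dT/dP)(P/T) = S at a fixed s.
s = 2.0j
C = 5.0 / (s + 1.0)            # arbitrary controller value at this s
P = 1.0 / (s + 3.0)            # arbitrary plant value at this s

def T_of(p):
    return p * C / (1.0 + p * C)

T = T_of(P)
S = 1.0 / (1.0 + P * C)

h = 1e-7 * P                   # small relative perturbation of P
dT_dP = (T_of(P + h) - T_of(P - h)) / (2.0 * h)

assert abs(dT_dP * P / T - S) < 1e-6
print("(dT/dP)(P/T) =", dT_dP * P / T, "= S")
```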
Now we have to decide on a performance specification, a measure of goodness of tracking. This
decision depends on two things: what we know about r and what measure we choose to assign to
the tracking error. Usually, r is not known in advance—few control systems are designed for one
and only one input. Rather, a set of possible rs will be known or at least postulated for the purpose
of design.
Let’s first consider sinusoidal inputs. Suppose that r can be any sinusoid of amplitude ≤ 1 and we want e to have amplitude < ε. Then the performance specification can be expressed succinctly as

    ‖S‖∞ < ε.

Here we used Table 2.1: the maximum amplitude of e equals the ∞-norm of the transfer function. Or if we define the (trivial, in this case) weighting function W1 (s) = 1/ε, then the performance specification is ‖W1 S‖∞ < 1.
The situation becomes more realistic and more interesting with a frequency-dependent weighting
function. Assume that W1 (s) is real-rational; you will see below that only the magnitude of W1 (jω)
is relevant, so any poles or zeros in Re s > 0 can be reflected into the left half-plane without changing
the magnitude. Let us consider four scenarios giving rise to an ∞-norm bound on W1 S. The first
three require W1 to be stable.
1. Suppose that the family of reference inputs is all signals of the form r = W1 rpf , where rpf , a pre-filtered input, is any sinusoid of amplitude ≤ 1. Thus the set of rs consists of sinusoids with frequency-dependent amplitudes. Then the maximum amplitude of e equals ‖W1 S‖∞ .
2. Recall from Chapter 2 that

    ‖r‖₂² = (1/2π) ∫_{−∞}^{∞} |r(jω)|² dω

and that ‖r‖₂² is a measure of the energy of r. Thus we may think of |r(jω)|² as energy spectral density, or energy spectrum. Suppose that the set of all rs is

    {r : r = W1 rpf , ‖rpf‖₂ ≤ 1},

that is,

    {r : (1/2π) ∫_{−∞}^{∞} |r(jω)/W1 (jω)|² dω ≤ 1}.

Thus, r has an energy constraint and its energy spectrum is weighted by 1/|W1 (jω)|². For example, if W1 were a bandpass filter, the energy spectrum of r would be confined to the passband. More generally, W1 could be used to shape the energy spectrum of the expected class of reference inputs. Now suppose that the tracking error measure is the 2-norm of e. Then from Table 2.2,

    sup_r ‖e‖₂ = sup{‖SW1 rpf‖₂ : ‖rpf‖₂ ≤ 1} = ‖W1 S‖∞ ,

so ‖W1 S‖∞ < 1 means that ‖e‖₂ < 1 for all rs in the set above.
3. This scenario is like the preceding one except for signals of finite power. We see from Table 2.2 that ‖W1 S‖∞ equals the supremum of pow(e) over all rpf with pow(rpf ) ≤ 1. So W1 could be used to shape the power spectrum of the expected class of rs.
4. In several applications, for example aircraft flight-control design, designers have acquired
through experience desired shapes for the Bode magnitude plot of S. In particular, suppose
that good performance is known to be achieved if the plot of |S(jω)| lies under some curve.
We could rewrite this as

    |S(jω)| < |W1 (jω)|⁻¹ ,    ∀ω,

or in other words, ‖W1 S‖∞ < 1.
There is a nice graphical interpretation of the norm bound ‖W1 S‖∞ < 1. Note that

    ‖W1 S‖∞ < 1  ⇔  |W1 (jω)/(1 + L(jω))| < 1, ∀ω
                  ⇔  |W1 (jω)| < |1 + L(jω)|, ∀ω.

The last inequality says that at every frequency, the point L(jω) on the Nyquist plot lies outside the disk of center −1, radius |W1 (jω)| (Figure 3.5).
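This pointwise equivalence is immediate algebraically, and a sketch in code (L and W1 are illustrative choices) makes the disk test concrete:

```python
# Pointwise disk test for ||W1 S||_inf < 1; L and W1 are illustrative.
def L(s):  return 10.0 / (s * (s + 1.0))
def W1(s): return 0.5 / (s + 1.0)

for k in range(1, 2001):
    s = 1j * (k * 0.01)                      # omega from 0.01 to 20
    weighted_S = abs(W1(s) / (1.0 + L(s)))
    outside_disk = abs(W1(s)) < abs(1.0 + L(s))
    # |W1(jw) S(jw)| < 1  <=>  L(jw) lies outside the disk at -1
    assert (weighted_S < 1.0) == outside_disk
print("disk condition agrees with the weighted bound on the grid")
```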
Figure 3.5: Performance specification graphically. [The Nyquist plot of L must stay outside the disk centered at −1 with radius |W1 (jω)| at each frequency ω.]
Other performance problems could be posed by focusing on the response to the other two
exogenous inputs, d and n. Note that the transfer functions from d, n to e, u are given by

    [ e ]     [ P S    S ] [ d ]
    [ u ] = − [  T    CS ] [ n ] ,

where

    T := 1 − S = P C/(1 + P C),
called the complementary sensitivity function.
Various performance specifications could be made using weighted versions of the transfer functions above. Note that a performance spec with weight W on P S is equivalent to the weight W P on
S. Similarly, a weight W on CS = T /P is equivalent to the weight W/P on T . Thus performance
specs that involve e result in weights on S and performance specs on u result in weights on T .
Essentially all problems in this book boil down to weighting S or T or some combination, and the
tradeoff between making S small and making T small is the main issue in design.
Exercises
1. Consider the unity-feedback system [F (s) = 1]. The definition of internal stability is that all
nine closed-loop transfer functions should be stable. In the unity-feedback case, it actually
suffices to check only two of the nine. Which two?
2. In this problem and the next, there is a mixture of the time and frequency domains, so the ˆ-convention is in force.
Let

    P̂ (s) = 1/(10s + 1),    Ĉ(s) = k,    F̂ (s) = 1.
Find the least positive gain k so that the following are all true:
(a) The feedback system is internally stable.
(b) |e(∞)| ≤ 0.1 when r(t) is the unit step and n = d = 0.
(c) ‖y‖∞ ≤ 0.1 for all d(t) such that ‖d‖₂ ≤ 1 when r = n = 0.
3. For the setup in Figure 3.4, take r = n = 0, d(t) = sin(ωt)1(t). Prove that if the feedback
system is internally stable, then y(t) → 0 as t → ∞ iff either P̂ has a zero at s = jω or Ĉ has
a pole at s = jω.
4. Consider the feedback system with plant P and sensor F . Assume that P is strictly proper
and F is proper. Find conditions on P and F for the existence of a proper controller so that
(a) The feedback system is internally stable.

(b) y(t) − r(t) → 0 when r is a unit step.

(c) y(t) → 0 when d is a sinusoid of frequency 100 rad/s.
Notes and References
The material in Sections 3.1 to 3.3 is quite standard. However, Section 3.4 reflects the more recent
viewpoint of Zames (1981), who formulated the problem of optimizing W1 S with respect to the
∞-norm, stressing the role of the weight W1 . Additional motivation for this problem is offered in
Zames and Francis (1983).
Chapter 4
Uncertainty and Robustness
No mathematical system can exactly model a physical system. For this reason we must be aware
of how modeling errors might adversely affect the performance of a control system. This chapter
begins with a treatment of various models of plant uncertainty. Then robust stability, stability in
the face of plant uncertainty, is studied using the small-gain theorem. The final topic is robust
performance, guaranteed tracking in the face of plant uncertainty.
4.1
Plant Uncertainty
The basic technique is to model the plant as belonging to a set P. The reasons for doing this were
presented in Chapter 1. Such a set can be either structured or unstructured.
For an example of a structured set consider the plant model

    1/(s² + as + 1).
This is a standard second-order transfer function with natural frequency 1 rad/s and damping ratio
a/2—it could represent, for example, a mass-spring-damper setup or an R-L-C circuit. Suppose
that the constant a is known only to the extent that it lies in some interval [amin , amax ]. Then the
plant belongs to the structured set

    P = { 1/(s² + as + 1) : amin ≤ a ≤ amax }.
Thus one type of structured set is parametrized by a finite number of scalar parameters (one
parameter, a, in this example). Another type of structured uncertainty is a discrete set of plants,
not necessarily parametrized explicitly.
For us, unstructured sets are more important, for two reasons. First, we believe that all models
used in feedback design should include some unstructured uncertainty to cover unmodeled dynamics,
particularly at high frequency. Other types of uncertainty, though important, may or may not
arise naturally in a given problem. Second, for a specific type of unstructured uncertainty, disk
uncertainty, we can develop simple, general analysis methods. Thus the basic starting point for an
unstructured set is that of disk-like uncertainty. In what follows, multiplicative disk uncertainty
is chosen for detailed study. This is only one type of unstructured perturbation. The important
point is that we use disk uncertainty instead of a more complicated description. We do this because
it greatly simplifies our analysis and lets us say some fairly precise things. The price we pay is
conservativeness.
Multiplicative Perturbation
Suppose that the nominal plant transfer function is P and consider perturbed plant transfer functions of the form P̃ = (1 + ∆W2 )P . Here W2 is a fixed stable transfer function, the weight, and
∆ is a variable stable transfer function satisfying ‖∆‖∞ < 1. Furthermore, it is assumed that no
unstable poles of P are canceled in forming P̃ . (Thus, P and P̃ have the same unstable poles.)
Such a perturbation ∆ is said to be allowable.
The idea behind this uncertainty model is that ∆W2 is the normalized plant perturbation away
from 1:

    P̃ /P − 1 = ∆W2 .

Hence if ‖∆‖∞ ≤ 1, then

    |P̃ (jω)/P (jω) − 1| ≤ |W2 (jω)|,    ∀ω,
so |W2 (jω)| provides the uncertainty profile. This inequality describes a disk in the complex plane:
At each frequency the point P̃ /P lies in the disk with center 1, radius |W2 |. Typically, |W2 (jω)|
is a (roughly) increasing function of ω: Uncertainty increases with increasing frequency. The main
purpose of ∆ is to account for phase uncertainty and to act as a scaling factor on the magnitude of
the perturbation (i.e., |∆| varies between 0 and 1).
Thus, this uncertainty model is characterized by a nominal plant P together with a weighting
function W2 . How does one get the weighting function W2 in practice? This is illustrated by a few
examples.
Example 1 Suppose that the plant is stable and its transfer function is arrived at by means of
frequency-response experiments: Magnitude and phase are measured at a number of frequencies,
ωi , i = 1, . . . , m, and this experiment is repeated several, say n, times. Let the magnitude-phase
measurement for frequency ωi and experiment k be denoted (Mik , φik ). Based on these data select
nominal magnitude-phase pairs (Mi , φi ) for each frequency ωi , and fit a nominal transfer function
P (s) to these data. Then fit a weighting function W2 (s) so that

    |Mik e^{jφik}/(Mi e^{jφi}) − 1| ≤ |W2 (jωi )|,    i = 1, . . . , m;  k = 1, . . . , n.
Example 2 Assume that the nominal plant transfer function is a double integrator:

    P (s) = 1/s².
For example, a dc motor with negligible viscous damping could have such a transfer function. You
can think of other physical systems with only inertia, no damping. Suppose that a more detailed
model has a time delay, yielding the transfer function

    P̃ (s) = e^{−τ s}/s²,
and suppose that the time delay is known only to the extent that it lies in the interval 0 ≤ τ ≤ 0.1.
This time-delay factor exp(−τ s) can be treated as a multiplicative perturbation of the nominal
plant by embedding P̃ in the family
{(1 + ∆W2 )P : ‖∆‖∞ ≤ 1}.
To do this, the weight W2 should be chosen so that the normalized perturbation satisfies

    |P̃ (jω)/P (jω) − 1| ≤ |W2 (jω)|,    ∀ω, τ,

that is,

    |e^{−τ jω} − 1| ≤ |W2 (jω)|,    ∀ω, τ.
A little time with Bode magnitude plots shows that a suitable first-order weight is

    W2 (s) = 0.21s/(0.1s + 1).
Figure 4.1 is the Bode magnitude plot of this W2 and exp(−τ s) − 1 for τ = 0.1, the worst value.
Figure 4.1: Bode magnitude plots of W2 (dash) and exp(−0.1s) − 1 (solid).
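The cover can be checked numerically (a sketch; the frequency grid is arbitrary). Note that this first-order weight is only an approximate cover: near 0.1ω = π the two curves touch, and on a fine grid the ratio briefly exceeds 1, though by well under one percent.

```python
import math

def W2_mag(w):                       # |W2(jw)| for W2 = 0.21s/(0.1s+1)
    return abs(0.21j * w / (0.1j * w + 1.0))

def delay_err(w, tau=0.1):           # |exp(-tau*jw) - 1|
    return abs(complex(math.cos(tau * w), -math.sin(tau * w)) - 1.0)

worst = 0.0
for k in range(1, 100001):
    w = k * 0.01                     # omega from 0.01 to 1000
    worst = max(worst, delay_err(w) / W2_mag(w))

# The cover is tight: near 0.1*w = pi the ratio slightly exceeds 1,
# but by well under one percent.
assert 0.99 < worst < 1.01
print("worst ratio |e^(-0.1jw) - 1| / |W2(jw)|:", worst)
```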
To get a feeling for how conservative this is, compare at a few frequencies ω the actual uncertainty
set

    {P̃ (jω)/P (jω) : 0 ≤ τ ≤ 0.1} = {e^{−τ jω} : 0 ≤ τ ≤ 0.1}
with the covering disk
{s : |s − 1| ≤ |W2 (jω)|}.
Example 3 Suppose that the plant transfer function is

    P̃ (s) = k/(s − 2),
where the gain k is uncertain but is known to lie in the interval [0.1, 10]. This plant too can be
embedded in a family consisting of multiplicative perturbations of a nominal plant

    P (s) = k0 /(s − 2).
The weight W2 must satisfy

    |P̃ (jω)/P (jω) − 1| ≤ |W2 (jω)|,    ∀ω, k,

that is,

    max_{0.1≤k≤10} |k/k0 − 1| ≤ |W2 (jω)|,    ∀ω.
The left-hand side is minimized by k0 = 5.05, for which the left-hand side equals 4.95/5.05. In this
way we get the nominal plant

    P (s) = 5.05/(s − 2)

and constant weight W2 (s) = 4.95/5.05.
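The optimal nominal gain can be confirmed by a direct scan (a sketch; the grid resolution is arbitrary). Since |k/k0 − 1| is monotone in k on each side of k0, the worst case over [0.1, 10] is attained at an endpoint:

```python
def worst_dev(k0):
    # max over k in [0.1, 10] of |k/k0 - 1|, attained at an endpoint
    return max(abs(0.1 / k0 - 1.0), abs(10.0 / k0 - 1.0))

# Scan candidate nominal gains over [0.1, 10]:
best_k0 = min((0.1 + i * 0.001 for i in range(9901)), key=worst_dev)

assert abs(best_k0 - 5.05) < 0.002
assert abs(worst_dev(5.05) - 4.95 / 5.05) < 1e-12
print("optimal k0 ~", best_k0, "worst deviation ~", worst_dev(best_k0))
```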
The multiplicative perturbation model is not suitable for every application because the disk
covering the uncertainty set is sometimes too coarse an approximation. In this case a controller
designed for the multiplicative uncertainty model would probably be too conservative for the original
uncertainty model.
The discussion above illustrates an important point. In modeling a plant we may arrive at a
certain plant set. This set may be too awkward to cope with mathematically, so we may embed it in
a larger set that is easier to handle. Conceivably, the achievable performance for the larger set may
not be as good as the achievable performance for the smaller; that is, there may exist—even though
we cannot find it—a controller that is better for the smaller set than the controller we design for
the larger set. In this sense the latter controller is conservative for the smaller set.
In this book we stick with plant uncertainty that is disk-like. It will be conservative for some
problems, but the payoff is that we obtain some very nice theoretical results. The resulting theory
is remarkably practical as well.
Other Perturbations
Other uncertainty models are possible besides multiplicative perturbations, as illustrated by the
following example.
Example 4 As at the start of this section, consider the family of plant transfer functions

    1/(s² + as + 1),    0.4 ≤ a ≤ 0.8.
Thus

    a = 0.6 + 0.2∆,    −1 ≤ ∆ ≤ 1,
so the family can be expressed as

    P (s)/(1 + ∆W2 (s)P (s)),    −1 ≤ ∆ ≤ 1,
where

    P (s) := 1/(s² + 0.6s + 1),    W2 (s) := 0.2s.
Note that P is the nominal plant transfer function for the value a = 0.6, the midpoint of the interval.
The block diagram corresponding to this representation of the plant is shown in Figure 4.2.

Figure 4.2: Example 4. [Block diagram: the plant output passes through ∆ and then W2 and is subtracted at the input of P — feedback uncertainty around P .]

Thus
the original plant has been represented as a feedback uncertainty around a nominal plant.
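The algebra behind this representation is easy to verify numerically (a sketch; the test frequency and ∆ values are arbitrary): for every ∆, P/(1 + ∆W2 P) collapses to 1/(s² + (0.6 + 0.2∆)s + 1).

```python
def P(s):  return 1.0 / (s * s + 0.6 * s + 1.0)
def W2(s): return 0.2 * s

s = 0.7j                           # arbitrary test frequency
for delta in (-1.0, -0.3, 0.0, 0.5, 1.0):
    lhs = P(s) / (1.0 + delta * W2(s) * P(s))
    a = 0.6 + 0.2 * delta
    rhs = 1.0 / (s * s + a * s + 1.0)
    assert abs(lhs - rhs) < 1e-12
print("feedback-uncertainty form matches the parametrized plant")
```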
The following list summarizes the common uncertainty models:
(1 + ∆W2 )P
P + ∆W2
P/(1 + ∆W2 P )
P/(1 + ∆W2 )
Appropriate assumptions would be made on ∆ and W2 in each case. Typically, we can relax the
assumption that ∆ be stable; but then the theorems to follow would be harder to prove.
4.2 Robust Stability
The notion of robustness can be described as follows. Suppose that the plant transfer function P
belongs to a set P, as in the preceding section. Consider some characteristic of the feedback system,
for example, that it is internally stable. A controller C is robust with respect to this characteristic
if this characteristic holds for every plant in P. The notion of robustness therefore requires a
controller, a set of plants, and some characteristic of the system. For us, the two most important
variations of this notion are robust stability, treated in this section, and robust performance, treated
in the next.
A controller C provides robust stability if it provides internal stability for every plant in P. We
might like to have a test for robust stability, a test involving C and P. Or if P has an associated
size, the maximum size such that C stabilizes all of P might be a useful notion of stability margin.
The Nyquist plot gives information about stability margin. Note that the distance from the critical point −1 to the nearest point on the Nyquist plot of L equals 1/‖S‖∞ :

    distance from −1 to Nyquist plot = inf_ω |−1 − L(jω)|
                                     = inf_ω |1 + L(jω)|
                                     = ( sup_ω 1/|1 + L(jω)| )⁻¹
                                     = ‖S‖∞⁻¹ .
Thus if ‖S‖∞ ≫ 1, the Nyquist plot comes close to the critical point, and the feedback system is nearly unstable. However, as a measure of stability margin this distance is not entirely adequate because it contains no frequency information. More precisely, if the nominal plant P is perturbed to P̃ , having the same number of unstable poles as has P and satisfying the inequality

    |P̃ (jω)C(jω) − P (jω)C(jω)| < ‖S‖∞⁻¹ ,    ∀ω,
then internal stability is preserved (the number of encirclements of the critical point by the Nyquist
plot does not change). But this is usually very conservative; for instance, larger perturbations could
be allowed at frequencies where P (jω)C(jω) is far from the critical point.
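The identity above, dist(−1, Nyquist plot of L) = 1/‖S‖∞, is easy to check numerically on a frequency grid. The following sketch does so for a hypothetical loop transfer function L(s) = 5/(s(s+1)(s+2)), chosen only for illustration (it is not an example from the text):

```python
import numpy as np

# A minimal numerical check that the distance from -1 to the Nyquist plot
# equals 1/||S||_inf, using a hypothetical loop L(s) = 5/(s(s+1)(s+2)).
w = np.logspace(-2, 2, 10000)          # frequency grid (rad/s)
s = 1j * w
L = 5.0 / (s * (s + 1) * (s + 2))
S = 1.0 / (1.0 + L)                    # sensitivity function on the grid

dist = np.min(np.abs(1.0 + L))         # inf_w |1 + L(jw)|
Sinf = np.max(np.abs(S))               # grid estimate of ||S||_inf
print(dist, 1.0 / Sinf)                # the two numbers coincide
```

Since |1 + L(jω)| = 1/|S(jω)| pointwise, the two printed numbers agree to machine precision; a finite grid only approximates the true infimum over all ω.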
Better stability margins are obtained by taking explicit frequency-dependent perturbation models: for example, the multiplicative perturbation model, P̃ = (1 + ∆W2)P. Fix a positive number β and consider the family of plants

{P̃ : ∆ is stable and ‖∆‖∞ ≤ β}.
Now a controller C that achieves internal stability for the nominal plant P will stabilize this entire
family if β is small enough. Denote by βsup the least upper bound on β such that C achieves internal
stability for the entire family. Then βsup is a stability margin (with respect to this uncertainty
model). Analogous stability margins could be defined for the other uncertainty models.
We turn now to two classical measures of stability margin, gain and phase margin. Assume
that the feedback system is internally stable with plant P and controller C. Now perturb the plant
to kP, with k a positive real number. The upper gain margin, denoted kmax, is the first value of k greater than 1 at which the feedback system is not internally stable; that is, kmax is the maximum number such that internal stability holds for 1 ≤ k < kmax. If there is no such number, then set
kmax := ∞. Similarly, the lower gain margin, kmin , is the least nonnegative number such that
internal stability holds for kmin < k ≤ 1. These two numbers can be read off the Nyquist plot of L;
for example, −1/kmax is the point where the Nyquist plot intersects the segment (−1, 0) of the real
axis, the closest point to −1 if there are several points of intersection.
Now perturb the plant to e^{−jφ}P, with φ a positive real number. The phase margin, φmax, is the
maximum number (usually expressed in degrees) such that internal stability holds for 0 ≤ φ < φmax .
You can see that φmax is the angle through which the Nyquist plot must be rotated until it passes
through the critical point, −1; or, in radians, φmax equals the arc length along the unit circle from
the Nyquist plot to the critical point.
Thus gain and phase margins measure the distance from the critical point to the Nyquist plot
in certain specific directions. Gain and phase margins have traditionally been important measures
of stability robustness: if either is small, the system is close to instability. Notice, however, that the
gain and phase margins can be relatively large and yet the Nyquist plot of L can pass close to the
critical point; that is, simultaneous small changes in gain and phase could cause internal instability.
We return to these margins in Chapter 11.
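Both margins can be read off numerically from frequency-response data. The sketch below does this for the same kind of hypothetical loop, L(s) = 5/(s(s+1)(s+2)); the example and grid resolution are assumptions for illustration, not part of the text:

```python
import numpy as np

# A rough numerical reading of gain and phase margins from Nyquist data of
# a hypothetical loop L(s) = 5/(s(s+1)(s+2)), chosen only for illustration.
w = np.logspace(-2, 2, 200000)         # frequency grid (rad/s)
s = 1j * w
L = 5.0 / (s * (s + 1) * (s + 2))

# Upper gain margin k_max: kL first passes through -1 where the Nyquist
# plot crosses the real axis at a point x in (-1, 0); there k_max = -1/x.
cross = np.where(np.diff(np.sign(L.imag)) != 0)[0]
x = L.real[cross]
x = x[(-1 < x) & (x < 0)]
kmax = np.min(-1.0 / x)

# Phase margin: 180 deg plus the phase of L at the gain crossover |L| = 1.
i = np.argmin(np.abs(np.abs(L) - 1.0))
pm = 180.0 + np.degrees(np.angle(L[i]))
print(kmax, pm)
```

For this loop the real-axis crossing is at −5/6, so kmax = 1.2, and the phase margin comes out near 5 degrees: both margins are small, consistent with a Nyquist plot passing close to the critical point.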
Now we look at a typical robust stability test, one for the multiplicative perturbation model.
Assume that the nominal feedback system (i.e., with ∆ = 0) is internally stable for controller C.
Bring in again the complementary sensitivity function

T = 1 − S = L/(1 + L) = PC/(1 + PC).
Theorem 1 (Multiplicative uncertainty model) C provides robust stability iff ‖W2 T‖∞ < 1.
Proof (⇐) Assume that ‖W2 T‖∞ < 1. Construct the Nyquist plot of L, indenting D to the left around poles on the imaginary axis. Since the nominal feedback system is internally stable, we know from the Nyquist criterion that the Nyquist plot of L does not pass through −1 and its number of counterclockwise encirclements equals the number of poles of P in Re s ≥ 0 plus the number of poles of C in Re s ≥ 0.
Fix an allowable ∆. Construct the Nyquist plot of P̃ C = (1+∆W2 )L. No additional indentations
are required since ∆W2 introduces no additional imaginary axis poles. We have to show that
the Nyquist plot of (1 + ∆W2 )L does not pass through -1 and its number of counterclockwise
encirclements equals the number of poles of (1 + ∆W2 )P in Re s ≥ 0 plus the number of poles of C
in Re s ≥ 0; equivalently, the Nyquist plot of (1 + ∆W2 )L does not pass through -1 and encircles
it exactly as many times as does the Nyquist plot of L. We must show, in other words, that the
perturbation does not change the number of encirclements.
The key equation is
1 + (1 + ∆W2 )L = (1 + L)(1 + ∆W2 T ).
(4.1)
Since

‖∆W2 T‖∞ ≤ ‖W2 T‖∞ < 1,
the point 1 + ∆W2 T always lies in some closed disk with center 1, radius < 1, for all points s on D.
Thus from (4.1), as s goes once around D, the net change in the angle of 1 + (1 + ∆W2 )L equals
the net change in the angle of 1 + L. This gives the desired result.
(⇒) Suppose that ‖W2 T‖∞ ≥ 1. We will construct an allowable ∆ that destabilizes the feedback system. Since T is strictly proper, |W2(jω)T(jω)| → 0 as ω → ∞; because its supremum over ω is at least 1, by continuity there is some frequency ω at which

|W2(jω)T(jω)| = 1.
(4.2)
Suppose that ω = 0. Then W2 (0)T (0) is a real number, either +1 or −1. If ∆ = −W2 (0)T (0), then
∆ is allowable and
1 + ∆W2 (0)T (0) = 0.
From (4.1) the Nyquist plot of (1 + ∆W2 )L passes through the critical point, so the perturbed
feedback system is not internally stable.
If ω > 0, constructing an admissible ∆ takes a little more work; the details are omitted.
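The key equation (4.1) on which the proof turns is a pure algebraic identity, so it can be spot-checked with arbitrary complex numbers standing in for the values of ∆, W2, and L at a point s (the random sampling here is an illustration device, not part of the text):

```python
import numpy as np

# A spot-check of the key identity (4.1),
#   1 + (1 + Delta*W2)*L = (1 + L)*(1 + Delta*W2*T),  with T = L/(1 + L),
# at random complex numbers standing in for the transfer functions at one s.
rng = np.random.default_rng(0)
for _ in range(5):
    Delta, W2, L = rng.standard_normal(3) + 1j * rng.standard_normal(3)
    T = L / (1 + L)
    lhs = 1 + (1 + Delta * W2) * L
    rhs = (1 + L) * (1 + Delta * W2 * T)
    assert np.isclose(lhs, rhs)
print("identity (4.1) holds at all sampled points")
```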
The theorem can be used effectively to find the stability margin βsup defined previously. The
simple scaling technique
{P̃ = (1 + ∆W2)P : ‖∆‖∞ ≤ β} = {P̃ = (1 + (β⁻¹∆)(βW2))P : ‖β⁻¹∆‖∞ ≤ 1}
= {P̃ = (1 + ∆1(βW2))P : ‖∆1‖∞ ≤ 1}

together with the theorem shows that

βsup = sup{β : ‖βW2 T‖∞ < 1} = 1/‖W2 T‖∞.
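In practice βsup is found by evaluating ‖W2 T‖∞ on a frequency grid. The sketch below uses hypothetical data, L(s) = 1/(s(s+1)) with weight W2(s) = 0.5s/(s+2); both are assumptions chosen for illustration:

```python
import numpy as np

# A grid estimate of the stability margin beta_sup = 1/||W2*T||_inf for
# hypothetical data: L(s) = 1/(s(s+1)) and weight W2(s) = 0.5*s/(s+2).
w = np.logspace(-3, 3, 100000)
s = 1j * w
L = 1.0 / (s * (s + 1))
T = L / (1.0 + L)                      # complementary sensitivity
W2 = 0.5 * s / (s + 2)

beta_sup = 1.0 / np.max(np.abs(W2 * T))
print(beta_sup)    # the largest norm bound on Delta that C can tolerate
```

For this data ‖W2 T‖∞ ≈ 0.22, so the controller tolerates multiplicative perturbations with ‖∆‖∞ up to roughly 4.5.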
The condition ‖W2 T‖∞ < 1 also has a nice graphical interpretation. Note that

‖W2 T‖∞ < 1 ⇔ |W2(jω)L(jω)|/|1 + L(jω)| < 1, ∀ω
⇔ |W2(jω)L(jω)| < |1 + L(jω)|, ∀ω.
The last inequality says that at every frequency, the critical point, -1, lies outside the disk of center
L(jω), radius |W2 (jω)L(jω)| (Figure 4.3).
Figure 4.3: Robust stability graphically.
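The equivalence of the disk picture and the norm test is easy to confirm pointwise on a grid. The sketch below reuses the hypothetical example data L(s) = 1/(s(s+1)) and W2(s) = 0.5s/(s+2) (assumptions for illustration, not from the text):

```python
import numpy as np

# Pointwise check of the disk condition |W2(jw)L(jw)| < |1 + L(jw)| against
# the norm test ||W2*T||_inf < 1, for hypothetical example data
# L(s) = 1/(s(s+1)), W2(s) = 0.5*s/(s+2).
w = np.logspace(-3, 3, 100000)
s = 1j * w
L = 1.0 / (s * (s + 1))
W2 = 0.5 * s / (s + 2)

disk_ok = bool(np.all(np.abs(W2 * L) < np.abs(1 + L)))  # -1 outside every disk
norm_ok = bool(np.max(np.abs(W2 * L / (1 + L))) < 1)    # ||W2*T||_inf < 1
print(disk_ok, norm_ok)   # the two tests give the same verdict
```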
There is a simple way to see the relevance of the