Related Reading
[KK]: 8, 9, 11 (stationary version)
1/82
The H2 -Norm
Consider the LTI system with state-space description
ẋ = Ax + Bw, z = Cx
and with transfer matrix
T (s) = C(sI − A)−1 B of dimension p × q.
Here w is a disturbance input and z is an output of interest that is
desired to be small. A quantification of the effect of the input w on
the output z is the so-called H2-norm of the transfer matrix.
Definition 1 Let T have all its poles in the open left half-plane.
The H2-norm of T is defined as
\[ \|T\|_2 := \left( \frac{1}{2\pi} \int_{-\infty}^{\infty} \|T(i\omega)\|_F^2 \, d\omega \right)^{1/2} \]
with \(\|\cdot\|_F\) denoting the Frobenius matrix norm.
2/82
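As a numerical illustration (not part of the original slides), the defining integral can be evaluated directly for the scalar example T(s) = 1/(s + 1), whose H2-norm is 1/√2:

```python
import numpy as np
from scipy.integrate import quad

# Scalar example T(s) = 1/(s+1); its Frobenius norm squared on the
# imaginary axis is |T(iw)|^2 = 1/(1+w^2).
def integrand(w):
    T = 1.0 / (1j * w + 1.0)
    return abs(T) ** 2

# H2-norm per the definition: sqrt( (1/2pi) * integral over the real line )
val, _ = quad(integrand, -np.inf, np.inf)
h2 = np.sqrt(val / (2 * np.pi))
print(h2)  # ≈ 0.7071 = 1/sqrt(2)
```

The exact value follows from ∫ dω/(1 + ω²) = π, so ‖T‖₂² = 1/2.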
Relation to Hardy-Spaces
The Hardy-space H2^{p×q} consists of all matrices S of dimension p × q
whose elements are analytic functions in the open right half-plane such that
\[ \|S\|_2^2 := \sup_{r>0} \frac{1}{2\pi} \int_{-\infty}^{\infty} \|S(r+i\omega)\|_F^2 \, d\omega \quad \text{is finite.} \]
For all such functions one can show that the limit
\[ \hat{T}(i\omega) := \lim_{r \searrow 0} S(r+i\omega) \]
exists for almost all ω ∈ R.
The subspace of all strictly proper and stable real rational matrices
is denoted as RH2^{p×q}. The subspace RH2^{p×q} is dense in H2^{p×q}.
One can show that F̂ is in H2^{p×q}. Parseval's theorem just means that the
Fourier transform is a linear isometry L2^{p×q}[0, ∞) → H2^{p×q}. A version of
the Paley-Wiener theorem establishes that this map is even surjective.
Therefore L2^{p×q}[0, ∞) and H2^{p×q} are actually isometrically isomorphic.
See: B.A. Francis, A course in H∞ -Control, Springer LNCIS 88, 1987.
4/82
Computation
It is a beautiful fact that the H2 -norm of a stable transfer matrix can
be computed algebraically on the basis of a state-space realization.
7/82
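The algebraic computation alluded to here uses the observability Gramian: for T(s) = C(sI − A)⁻¹B with A Hurwitz, ‖T‖₂² = tr(BᵀXB) where AᵀX + XA + CᵀC = 0. A minimal sketch (the realization below is chosen arbitrarily for illustration), cross-checked against the frequency-domain definition:

```python
import numpy as np
from scipy.integrate import quad
from scipy.linalg import solve_continuous_lyapunov

# Arbitrary stable example: T(s) = 1/((s+1)(s+2))
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Observability Gramian X solving A^T X + X A + C^T C = 0
X = solve_continuous_lyapunov(A.T, -C.T @ C)
h2_alg = np.sqrt(np.trace(B.T @ X @ B))

# Cross-check against the frequency-domain definition
def density(w):
    T = C @ np.linalg.inv(1j * w * np.eye(2) - A) @ B
    return np.linalg.norm(T, "fro") ** 2

val, _ = quad(density, -np.inf, np.inf)
h2_freq = np.sqrt(val / (2 * np.pi))
print(h2_alg, h2_freq)  # both ≈ sqrt(1/12)
```

For this example the impulse response is e^{−t} − e^{−2t}, whose squared L2-norm is 1/12, confirming the computation.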
Random Vectors
Uncertain outcomes of experiments are modeled by random vectors
x = (x1 · · · xn )T . Here x is a vector of n random variables x1 , . . . , xn
and is characterized by its distribution function Fx : Rn → R which
admits the following interpretation: If (ξ1 · · · ξn )T ∈ Rn , the probability
for the event x1 ≤ ξ1 , ..., xn ≤ ξn to happen is given by Fx (ξ1 , ..., ξn ).
8/82
Expectation and Covariance
Suppose g : R^n → R^{k×l} is Borel measurable. If the random vector
x = (x1 · · · xn)ᵀ has the density fx(τ1, ..., τn), the expectation of
g(x1, ..., xn) is a matrix in R^{k×l} and is defined as
\[ E[g(x_1,\dots,x_n)] = \int_{-\infty}^{+\infty} \cdots \int_{-\infty}^{+\infty} g(\tau_1,\dots,\tau_n)\, f_x(\tau_1,\dots,\tau_n)\, d\tau_n \cdots d\tau_1. \]
Instead let us just collect some basic facts that are required in the sequel.
1) There exists a Wiener-process W (t) for t ≥ 0 with intensity 1:
• Initialized at zero: W (0) = 0 with probability one.
• Independent increments: For all 0 ≤ t1 ≤ t2 ≤ t3 ≤ t4 , the random
variables W (t2 ) − W (t1 ) and W (t4 ) − W (t3 ) are independent.
• Gaussian increments: For all 0 ≤ t1 ≤ t2, the increment W(t2) −
W(t1) is Gaussian with expectation 0 and variance t2 − t1 = 1 · |t2 − t1|.
10/82
Simulation of Standard Wiener Process
[Figure: realizations of the standard Wiener process over t ∈ [0, 10].]
14/82
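Realizations like the ones plotted above can be regenerated with a minimal sketch such as the following, using independent Gaussian increments of variance dt (step size and path count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T, n_paths = 0.01, 10.0, 5
n = int(T / dt)

# Independent increments W(t+dt) - W(t) ~ N(0, dt); W(0) = 0
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)

# Statistical sanity check: Var[W(T)] should be close to T = 10,
# so estimate it from many independent endpoint samples
many = rng.normal(0.0, np.sqrt(dt), size=(20000, n)).sum(axis=1)
print(many.var())  # close to 10
```

Each row of W is one sample path; the variance check reflects the Gaussian-increments property from the previous slide.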
White Noise and System Response
Let us come back to slide 2 with w being interpreted as the “derivative
of a Wiener process”; we (formally) denote it by Ẇ . In this sense “W
can be obtained by integrating white noise”:
\[ W(t) = \text{``}\int_0^t \dot W(\tau)\, d\tau\text{''} = \int_0^t dW(\tau) \quad \text{for } t \ge 0. \]
The middle expression is NOT sensible mathematically. But we can now
just define precisely what we mean by the state-response of a linear
system to a white noise input and a random initial condition ξ. Tacitly,
we assume that ξ is Gaussian and independent from W (t) for all t ≥ 0.
15/82
White Noise and System Response
According to our preparatory remarks x(.) is a Gaussian process.
By simply applying the rules given above, the expectation of x(t) and the
covariance matrix of x(t1) and x(t2) are easily determined.
[Figure: sample state responses over t ∈ [0, 10] (left) with corresponding histograms of the state values (right).]
18/82
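To make the covariance rules concrete: for ẋ = Ax + BẆ with A Hurwitz, the covariance of x(t) converges to the solution X of AX + XAᵀ + BBᵀ = 0. A hedged numerical sketch (scalar example with illustrative parameters), comparing an Euler-Maruyama ensemble against the Lyapunov prediction:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)
a, b = -1.0, 1.0          # dx = a x dt + b dW, x(0) = 0
dt, n_steps, n_paths = 0.005, 2000, 20000

x = np.zeros(n_paths)
for _ in range(n_steps):  # Euler-Maruyama over t in [0, 10]
    x = x + a * x * dt + b * np.sqrt(dt) * rng.normal(size=n_paths)

# Stationary covariance from the Lyapunov equation a X + X a + b^2 = 0
X = solve_continuous_lyapunov(np.array([[a]]), -np.array([[b * b]]))
print(x.var(), X[0, 0])  # both close to 0.5
```

The small residual discrepancy is the combination of discretization bias and sampling error, both of which shrink with dt and the number of paths.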
Colored Noise
Definition 7 We say that w̃ is colored noise if there exists
(Ã, B̃, C̃) with eig(Ã) ⊂ C− such that w̃ is the output of
x̃˙ = Ãx̃ + B̃ Ẇ , w̃ = C̃ x̃, x̃(0) = 0.
22/82
Coloring Filters
Theorem 10 The spectral density of w̃ is given by
R̂(iω) = T̃ (iω)T̃ (iω)∗ .
Hence R̂(iω) is Hermitian and positive semi-definite for all ω ∈ R.
[Figure: covariance function (left) and spectral density on logarithmic axes (right) of the colored-noise process.]
24/82
Coloring Filters: Construction
We are now prepared for discussing how to determine coloring filters
in practice. By measurements one (statistically) estimates the spectral
density R̂(iω) of the process under scrutiny. One then approximates the
experimentally determined spectral density by G(iω) where G(s) is a
strictly proper real rational function without poles in C0 and with
G(iω) = G(iω)∗ and G(iω) > 0 for all ω ∈ R. (∗)
Finally, the coloring filter is obtained by spectral factorization.
This implies G(iω) = T (iω)T (iω)∗ such that T is a coloring filter for
modeling noise with the spectral density G, just as we desired. Often T
is also called a spectral factor of G. 25/82
Example
Consider the transfer function G(s) = (1 − s²)/(s⁴ − 13s² + 36). We have
\[ G(i\omega) = \frac{1 + \omega^2}{\omega^4 + 13\omega^2 + 36} > 0 \quad \text{for } \omega \in \mathbb{R}. \]
Hence, by Theorem 11, G has a spectral factorization. This can be seen
directly as follows. The numerator and denominator of G(s) can be
factorized as
(1 + s)(1 − s) and (3 + s)(2 + s)(2 − s)(3 − s)
respectively. The symmetric location of the poles and zeros with respect
to the imaginary axis is a consequence of G(iω) being real and positive
for ω ∈ R. Obviously, both transfer functions
\[ T_-(s) = \frac{1+s}{(3+s)(2+s)} \quad \text{and} \quad T_+(s) = \frac{1-s}{(3+s)(2+s)} \]
are spectral factors of G. They differ in that T− shares
its stable zero(s) with G, while T+ shares its anti-stable zero(s) with G.
26/82
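The by-hand factorization can be mimicked numerically: collect the stable roots of numerator and denominator of G and verify |T−(iω)|² = G(iω) on a frequency grid. A sketch specific to this example:

```python
import numpy as np

# G(s) = (1 - s^2) / (s^4 - 13 s^2 + 36)
num = np.array([-1.0, 0.0, 1.0])             # -s^2 + 1
den = np.array([1.0, 0.0, -13.0, 0.0, 36.0])

# Keep only roots in the open left half-plane (stable zero -1; poles -2, -3)
z_stable = [r for r in np.roots(num) if r.real < 0]
p_stable = [r for r in np.roots(den) if r.real < 0]

t_num = np.poly(z_stable)   # s + 1
t_den = np.poly(p_stable)   # (s + 2)(s + 3)

def G(s):
    return np.polyval(num, s) / np.polyval(den, s)

def T(s):
    return np.polyval(t_num, s) / np.polyval(t_den, s)

# Spectral-factor property on the imaginary axis: T(iw) T(iw)* = G(iw)
w = np.linspace(-5.0, 5.0, 101)
err = np.max(np.abs(T(1j * w) * np.conj(T(1j * w)) - G(1j * w)))
print(err)  # ~ 0
```

Picking the anti-stable zero +1 instead would produce T+ with the same magnitude on the imaginary axis.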
System Description for Design
ẋ = Ax + Bw w + Bu
z = Cz x + Dzw w + Dz u
y = Cx + Dw w + Du
[Block diagram: generalized plant P with inputs w, u and outputs z, y.]
28/82
LQG-Control
One particular scenario is worth mentioning. Consider the system
ẋ = Ax + B1 Ẇ1 + Bu
with control input u and process noise B1 Ẇ1. Suppose the measurements
Cx are corrupted by white noise Ẇ2 where W1 and W2 are independent
Wiener processes. The measured output hence is
y = Cx + D2 Ẇ2 .
As in LQ-control, we are interested in keeping the linear combinations
C1 x and D1 u of the states and the controls small. Hence we choose
\[ z = \begin{pmatrix} C_1 x \\ D_1 u \end{pmatrix} \]
as the performance output. The LQG-control goal is to find a stabilizing
controller which minimizes \(\lim_{t\to\infty} \operatorname{tr} \operatorname{cov}(z(t), z(t))\), which equals
\[ \lim_{t\to\infty} E[z(t)^T z(t)] = \lim_{t\to\infty} E\big[x(t)^T C_1^T C_1 x(t) + u(t)^T D_1^T D_1 u(t)\big]. \]
29/82
LQG-Control
Let us define the following generalized plant:
\[ \dot x = Ax + \begin{pmatrix} B_1 & 0 \end{pmatrix} w + Bu, \quad z = \begin{pmatrix} C_1 \\ 0 \end{pmatrix} x + \begin{pmatrix} 0 \\ D_1 \end{pmatrix} u, \quad y = Cx + \begin{pmatrix} 0 & D_2 \end{pmatrix} w \]
with the white-noise input w collecting Ẇ1 and Ẇ2. Hence B1 B1ᵀ and D2 D2ᵀ are the intensities of the process and measurement noise, respectively. 30/82
LQG-Control
If Ẇ1 is not white but colored noise w̃1 , one absorbs the related coloring
filter T (s) = C̃(sI − Ã)−1 B̃ into the generalized plant and solves the
H2 -problem for the weighted generalized plant
\[ \begin{pmatrix} \dot x \\ \dot{\tilde x} \end{pmatrix} = \begin{pmatrix} A & B_1 \tilde C \\ 0 & \tilde A \end{pmatrix} \begin{pmatrix} x \\ \tilde x \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ \tilde B & 0 \end{pmatrix} w + \begin{pmatrix} B \\ 0 \end{pmatrix} u \]
\[ z = \begin{pmatrix} C_1 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} x \\ \tilde x \end{pmatrix} + \begin{pmatrix} 0 \\ D_1 \end{pmatrix} u, \quad y = \begin{pmatrix} C & 0 \end{pmatrix} \begin{pmatrix} x \\ \tilde x \end{pmatrix} + \begin{pmatrix} 0 & D_2 \end{pmatrix} w. \]
The block-diagram of the weighted generalized plant is more instructive:
[Block diagram: the coloring filter T maps white noise w1 to w̃1, which enters the plant P together with w2; P has outputs z and y and control input u.]
31/82
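Absorbing the coloring filter is a pure block-matrix construction. A sketch with np.block (all plant matrices are illustrative placeholders; the first-order filter data loosely follow the later example 134.2/(s + 10)):

```python
import numpy as np

# Illustrative plant data (x in R^2, scalar u and y, w = (w1, w2))
A  = np.array([[0.0, 1.0], [-2.0, -3.0]]); B = np.array([[0.0], [1.0]])
B1 = np.array([[1.0], [0.0]]);             C = np.array([[1.0, 0.0]])
D2 = np.array([[1.0]])

# Coloring filter T(s) = Cf (sI - Af)^{-1} Bf = 134.2/(s + 10)
Af = np.array([[-10.0]]); Bf = np.array([[1.0]]); Cf = np.array([[134.2]])

# Weighted generalized plant: filter state appended to the plant state
Aw = np.block([[A, B1 @ Cf], [np.zeros((1, 2)), Af]])
Bw = np.block([[np.zeros((2, 2))], [np.hstack([Bf, np.zeros((1, 1))])]])
Bu = np.vstack([B, np.zeros((1, 1))])
Cy = np.hstack([C, np.zeros((1, 1))])

# Block-triangular structure: eig(Aw) = eig(A) ∪ eig(Af), all stable here
print(np.linalg.eigvals(Aw))
```

The upper block-triangular structure of Aw makes the stability requirement eig(Ã) ⊂ C− from Definition 7 directly visible.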
Static State-Feedback Synthesis
Let us first consider the case that the whole state is available for control.
For this purpose we consider the open-loop system
ẋ = Ax + Bw w + Bu
z = Cz x + Dz u
under the following hypotheses:
• (A, B) is stabilizable.
• (A, Cz ) has no unobservable modes in C0 .
• \(D_z^T \begin{pmatrix} C_z & D_z \end{pmatrix} = \begin{pmatrix} 0 & I \end{pmatrix}\).
[Figure: control action (top) and reference (red) with output (bottom) over t ∈ [0, 10].]
36/82
Example
Assume that the control input is affected by colored noise with filter
134.2/(s + 10), scaled to have H2-norm 30 in order to clearly display its
effect. We get the following substantially deteriorated response:
[Figure: control action (top) and reference (red) with output (bottom) under colored input noise, t ∈ [0, 10].]
37/82
Example
With the coloring filter included in the system description, we design an
optimal H2-state-feedback gain for the cost function
2000|φ|² + 0.3|u|².
The feed-forward gain is adjusted appropriately. The cost has been tuned
so that the noise-free response resembles the one we obtained earlier:
[Figure: control action (top) and reference (red) with output (bottom) for the redesigned controller, t ∈ [0, 10].]
38/82
Example
The noisy closed-loop response shows that the effect of the noise onto
the to-be-tracked output is visibly reduced:
[Figure: noisy closed-loop control action (top) and reference (red) with output (bottom), t ∈ [0, 10].]
39/82
Kalman Filtering
Consider again the full generalized plant
ẋ = Ax + Bw w + Bu, z = Cz x + Dz u, y = Cx + Dw w.
If w = 0, an observer for this system is defined as
x̂˙ = Ax̂ + Bu + L(y − ŷ), ẑ = Cz x̂ + Dz u, ŷ = C x̂
where L is taken with eig(A − LC) ⊂ C− ; then the observer state
asymptotically reconstructs the system’s state.
The asymptotic variance of the estimation error z − ẑ can serve as a measure for how well ẑ(t) approximates z(t) for t → ∞.
43/82
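A minimal sketch of such an observer (the gain L is obtained here by pole placement on the dual system; all numbers are illustrative): the error e = x − x̂ obeys ė = (A − LC)e and decays whenever eig(A − LC) ⊂ C−.

```python
import numpy as np
from scipy.linalg import expm
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

# Observer gain via pole placement on the dual pair (A^T, C^T)
L = place_poles(A.T, C.T, [-5.0, -6.0]).gain_matrix.T

# Error dynamics e' = (A - LC) e: check eigenvalues and decay after 5 s
Ae = A - L @ C
e0 = np.array([1.0, -1.0])
e5 = expm(Ae * 5.0) @ e0
print(np.linalg.eigvals(Ae), np.linalg.norm(e5))
```

With the observer poles at −5 and −6 the error norm after five seconds is negligibly small, illustrating the asymptotic state reconstruction.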
Remarks
• Note that the optimal observer does not depend on Cz or Dz! In
particular it is also an optimal H2-observer for the full state x
(with the choices Cz = I and Dz = 0) with optimal value tr(Q).
• An optimal H2 -estimator is an unstructured LTI system with inputs
u and y, which generates an asymptotic state-estimate x̂ such that
Cz x̂+Dz u is an optimal estimate of z in the H2 -sense. One can prove
that general estimators do not offer any benefit over observers!
• In view of the stochastic interpretation of the H2 -norm, optimal H2 -
observers minimize the asymptotic variance of z − ẑ if w is white
noise. Then the optimal observer is the celebrated Kalman-Filter.
44/82
Example
Consider again the two-compartment model ([AM] pp.85):
\[ \dot c = \begin{pmatrix} -k_0 - k_1 & k_1 \\ k_2 & -k_2 \end{pmatrix} c + \begin{pmatrix} 0 \\ \lambda_1 \end{pmatrix} w_1 + \begin{pmatrix} b_0 \\ 0 \end{pmatrix} u, \quad y = \begin{pmatrix} 1 & 0 \end{pmatrix} c + \lambda_2 w_2 \]
[Figure: system and observer responses as well as estimation errors over t ∈ [0, 10].]
46/82
Example
System and observer responses as well as errors for λ1 = λ2 = 0.1:
[Figure: system and observer responses as well as estimation errors over t ∈ [0, 10] for λ1 = λ2 = 0.1.]
47/82
Example
System and observer responses as well as errors for λ1 = λ2 = 1:
[Figure: system and observer responses as well as estimation errors over t ∈ [0, 10] for λ1 = λ2 = 1.]
48/82
The Output-Feedback H2 -Control Problem
Open-loop system P:
ẋ = Ax + Bw w + Bu
z = Cz x + Dz u
y = Cx + Dw w
Controller K:
ẋK = AK xK + BK y
u = CK xK
Controlled system described as \(\dot\xi = \mathcal{A}\xi + \mathcal{B}w\), \(z = \mathcal{C}\xi\) with
\[ \begin{pmatrix} \mathcal{A} & \mathcal{B} \\ \mathcal{C} & \mathcal{D} \end{pmatrix} = \begin{pmatrix} A & B C_K & B_w \\ B_K C & A_K & B_K D_w \\ C_z & D_z C_K & 0 \end{pmatrix}. \]
49/82
Hypotheses
We derive a solution to this optimal synthesis problem in terms of AREs
under the following assumptions:
1. (A, B) is stabilizable and (A, C) is detectable.
These are required for the existence of a stabilizing controller.
2. \(D_z^T \begin{pmatrix} C_z & D_z \end{pmatrix} = \begin{pmatrix} 0 & I \end{pmatrix}\) and \(\begin{pmatrix} B_w \\ D_w \end{pmatrix} D_w^T = \begin{pmatrix} 0 \\ I \end{pmatrix}\).
It's essential that Dz and Dw have full column and row rank. The
other properties are introduced to simplify the formulas.
If P denotes the stabilizing solution of
\[ A^T P + P A - P B B^T P + C_z^T C_z = 0 \]
and Q the stabilizing solution of
\[ A Q + Q A^T - Q C^T C Q + B_w B_w^T = 0, \]
an H2-optimal controller is given as
\[ \dot x_K = (A - B B^T P - Q C^T C) x_K + Q C^T y, \quad u = -B^T P x_K \]
and the corresponding optimal closed-loop H2-norm is
\[ \sqrt{\operatorname{tr}(B_w^T P B_w) + \operatorname{tr}(B^T P Q P B)}. \]
The proof is given in the appendix. The following remarks on the
interpretation and on generalizations of this result are essential.
51/82
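A numerical sketch of this result (the plant data below are a hypothetical example chosen to satisfy the hypotheses): both AREs are solved with scipy, the controller is assembled, and the closed-loop H2-norm is compared with the formula.

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# Illustrative data with Dz^T [Cz Dz] = [0 I] and [Bw; Dw] Dw^T = [0; I]
A  = np.array([[0.0, 1.0], [-2.0, -3.0]])
B  = np.array([[0.0], [1.0]])
Bw = np.array([[1.0, 0.0], [0.5, 0.0]])
C  = np.array([[1.0, 0.0]])
Dw = np.array([[0.0, 1.0]])
Cz = np.array([[1.0, 0.0], [0.0, 0.0]])
Dz = np.array([[0.0], [1.0]])

# Control ARE: A^T P + P A - P B B^T P + Cz^T Cz = 0
P = solve_continuous_are(A, B, Cz.T @ Cz, np.eye(1))
# Filter ARE:  A Q + Q A^T - Q C^T C Q + Bw Bw^T = 0
Q = solve_continuous_are(A.T, C.T, Bw @ Bw.T, np.eye(1))

# H2-optimal controller from the slide
AK = A - B @ B.T @ P - Q @ C.T @ C
BK = Q @ C.T
CK = -B.T @ P

# Closed loop and its H2-norm via the observability Gramian
Acl = np.block([[A, B @ CK], [BK @ C, AK]])
Bcl = np.vstack([Bw, BK @ Dw])
Ccl = np.hstack([Cz, Dz @ CK])
X = solve_continuous_lyapunov(Acl.T, -Ccl.T @ Ccl)
h2_cl = np.sqrt(np.trace(Bcl.T @ X @ Bcl))

h2_formula = np.sqrt(np.trace(Bw.T @ P @ Bw) + np.trace(B.T @ P @ Q @ P @ B))
print(h2_cl, h2_formula)  # the two values coincide
```

The agreement of the two printed values is exactly the statement of the theorem for this example.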
Output-Feedback Control: The Separation Principle
The H2 -optimal state-feedback gain is F = B T P while L = QC T is the
H2 -optimal observer gain. With these, the optimal H2 -controller can be
written as ẋK = (A − BF − LC)xK + Ly, u = −F xK or as
ẋK = AxK + Bu + L(y − ŷ), ŷ = CxK , u = −F xK .
ẋ = Ax + Bw w + Bu, z = Cz x + Dzw w + Dz u, y = Cx + Dw w + Du
[Block diagram: generalized plant P.]
The perturbed system is
ẋ = Ax + Bw w + Bwe we + Bu
z = Cz x + Dzw w + Dz u
ze = Cze x + Dze u
y = Cx + Dw w + Dwe we + Du
[Block diagram: perturbed plant P with additional disturbance input we and performance output ze.]
The perturbed system satisfies the required hypotheses and one can find
an optimal H2-controller K with optimal value γ. Now interconnect
K with the original system P; denote the closed-loop transfer matrix by T.
[Figure: control action (top) and reference (red) with output (bottom) over t ∈ [0, 10].]
56/82
Example
For increased noise-levels we see the benefit of LQG-control:
[Figures: control action and reference (red) with output for the two designs over t ∈ [0, 10].]
57/82
Example
However, comparisons of this sort can be misleading:
• Obviously the response of the pole-placement observer to non-zero
initial conditions is faster than that of the LQG controller. This explains
its higher sensitivity to noise. Slowing down the observer poles does
not alter the tracking behavior (a lot), but it reduces the sensitivity
to noise.
• For this simple example, one can obtain similar designs by
pole-placement and LQG-synthesis after tuning. In practice, modern
synthesis tools rather serve to reduce the time required for tuning a
controller, while the optimality properties are not that crucial.
• LQG output-feedback controllers suffer from an essential deficiency:
there are no guarantees for robustness! This led to the development
of dedicated tools for robust controller synthesis. You are now well-
prepared to enter this exciting field of control.
58/82
Appendix
59/82
Proof of Theorem 11: Step 1
Choose a minimal realization G(s) = CG (sI −AG )−1 BG . Since G has no
poles in C0 , the matrix AG has only eigenvalues in C− or C+ . In suitable
coordinates we can assume that the realization has the structure
A1 0 B1
0 A2 B2 , eig(A1 ) ⊂ C− , eig(A2 ) ⊂ C+ .
C1 C2 0
Therefore
G(s) = C1 (sI − A1 )−1 B1 + C2 (sI − A2 )−1 B2 .
Observe that
G(−s)T = B2T (−sI − AT2 )−1 C2T + B1T (−sI − AT1 )−1 C1T .
Since G(iω)∗ = G(−iω)T , we have G(s) = G(−s)T for s ∈ C0 and
hence for all s ∈ C (different from poles of G(s) and G(−s)T ). Thus
the stable and anti-stable parts in the additive decomposition of G(s)
and G(−s)T coincide. 59/82
Proof of Theorem 11: Step 1
Therefore
C1 (sI −A1 )−1 B1 = B2T (−sI −AT2 )−1 C2T = (−B2T )(sI −(−AT2 ))−1 C2T .
By realization minimality, there exists a non-singular T such that
A1 = −T AT2 T −1 , B1 = T C2T , C1 = −B2T T −1 .
Performing yet another state-coordinate change proves the following
intermediate fact.
62/82
Proof of the Positive Real Lemma
According to slide 66, we need to show for any ω ∈ R that
\[ \det \begin{pmatrix} (A - \nu BC)^T - i\omega I & -\nu C^T C \\ \nu B B^T & -(A - \nu BC) - i\omega I \end{pmatrix} \neq 0. \quad (**) \]
For this purpose note that \(G(-i\omega) + \frac{1}{\nu} I \succ 0\) and hence
\[ J(-i\omega)^* + J(-i\omega) + \frac{1}{\nu} I \succ 0. \]
In particular the left-hand side is non-singular; it is also clearly (check!)
the Schur complement of
\[ \begin{pmatrix} A^T - i\omega I & 0 & C^T \\ 0 & -A - i\omega I & -B \\ B^T & C & \frac{1}{\nu} I \end{pmatrix} \quad (*{*}*) \]
with respect to the left-upper block. Since A has no eigenvalues in C0
we infer that (∗∗∗) is non-singular. Hence the Schur-complement with
respect to the right-lower block is also non-singular. This is (∗∗).
63/82
Proof of the Positive Real Lemma
Let Xν− and Xν+ denote the stabilizing and anti-stabilizing solution of
(ARE). Now observe for ν ≥ µ that
Remark: The same argument applies to Xν+ . We get two limits which
lead to different spectral factors that are distinct in the properties of
their zeros. A detailed exposition goes beyond this course.
64/82
Example for Theorem 11
Consider G(s) = (1 − s²)/(s⁴ − 13s² + 36). Determine a minimal realization of G and
65/82
Riccati Theory: Addendum I
With controllable (A, B), positive definite R and just a symmetric Q,
let us consider the algebraic Riccati equation
AT P + P A − P BR−1 B T P + Q = 0.
68/82
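A quick sketch of the stabilizing solution and its defining properties (the data below are illustrative; (A, B) is controllable, R is positive definite, Q symmetric):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[1.0, 0.0], [0.0, -2.0]])   # one unstable mode at +1
B = np.array([[1.0], [1.0]])
Q = np.eye(2)            # symmetric (here even positive definite)
R = np.eye(1)            # positive definite

P = solve_continuous_are(A, B, Q, R)

# Residual of A^T P + P A - P B R^{-1} B^T P + Q = 0
res = A.T @ P + P @ A - P @ B @ np.linalg.inv(R) @ B.T @ P + Q
# Stabilizing property: eig(A - B R^{-1} B^T P) in the open left half-plane
eigs = np.linalg.eigvals(A - B @ np.linalg.inv(R) @ B.T @ P)
print(np.max(np.abs(res)), eigs.real)
```

scipy returns precisely the stabilizing solution, as the closed-loop eigenvalues confirm.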
Proof: Step 1
The kernel N (P ) of P is A-invariant and contained in N (C).
Indeed, with x satisfying P x = 0, we infer from xT (ARE)x = 0 that
kCxk2 = 0 and thus Cx = 0. Then (ARE)x = 0 implies P Ax = 0.
This proves the claim.
70/82
Proof: Step 3 - The Coup de Grâce
By the last property on slide 66 we conclude \(Q_1 \succ Y_1\). (This requires some
work; please check!) Due to
\[ \begin{pmatrix} Y_1 & Y_{12} \\ Y_{21} & Y_2 \end{pmatrix} = \begin{pmatrix} X_1 & X_{12} \\ X_{21} & X_2 \end{pmatrix}^{-1} \]
and the block-inversion formula we have
\[ P_1 = Q_1^{-1} \prec Y_1^{-1} = X_1 - X_{12} X_2^{-1} X_{21}. \]
With \(X_2 \succ 0\) we infer
\[ \begin{pmatrix} P_1 & 0 \\ 0 & 0 \end{pmatrix} \prec \begin{pmatrix} X_1 - X_{12} X_2^{-1} X_{21} & 0 \\ 0 & X_2 \end{pmatrix}. \]
A congruence transformation with \(\begin{pmatrix} I & X_{12} X_2^{-1} \\ 0 & I \end{pmatrix}\) finally leads to
\[ \begin{pmatrix} P_1 & 0 \\ 0 & 0 \end{pmatrix} \prec \begin{pmatrix} X_1 & X_{12} \\ X_{21} & X_2 \end{pmatrix} \]
which finishes the proof.
71/82
Proof of Theorem 14
If γopt denotes the optimal value of the H2-problem, the first step is
to show \(\operatorname{tr}(B_w^T P B_w + B^T P Q P B) \le \gamma_{\mathrm{opt}}^2\). Choose γ > γopt. We can
then find AK, BK, CK which render \(\mathcal{A}\) Hurwitz and such that
\(\|\mathcal{C}(sI - \mathcal{A})^{-1}\mathcal{B}\|_2^2 < \gamma^2\). By simply expanding the controller realization with
stable uncontrollable or unobservable modes, we can assume that AK
has at least the same dimension as A. Moreover, by slide 6 there exists
some \(\mathcal{X} \succ 0\) with \(\mathcal{A}^T\mathcal{X} + \mathcal{X}\mathcal{A} + \mathcal{C}^T\mathcal{C} \prec 0\) and \(\operatorname{tr}(\mathcal{B}^T\mathcal{X}\mathcal{B}) < \gamma^2\). With
\(Z := \mathcal{B}^T\mathcal{X}\mathcal{B} + \epsilon I\) and suitably small \(\epsilon > 0\) we then infer
\[ \mathcal{A}^T\mathcal{X} + \mathcal{X}\mathcal{A} + \mathcal{C}^T\mathcal{C} \prec 0, \quad \begin{pmatrix} \mathcal{X} & \mathcal{X}\mathcal{B} \\ \mathcal{B}^T\mathcal{X} & Z \end{pmatrix} \succ 0, \quad \operatorname{tr}(Z) < \gamma^2. \quad (1) \]
Let us first prepare some relations for the proof. In view of (1) we need
\[ S\mathcal{A}^T\mathcal{X}S^T + S\mathcal{X}\mathcal{A}S^T + S\mathcal{C}^T\mathcal{C}S^T, \quad \begin{pmatrix} S\mathcal{X}S^T & S\mathcal{X}\mathcal{B} \\ \mathcal{B}^T\mathcal{X}S^T & Z \end{pmatrix}, \]
which are equal to
\[ S\mathcal{A}^T R^T + R\mathcal{A}S^T + S\mathcal{C}^T\mathcal{C}S^T, \quad \begin{pmatrix} RS^T & R\mathcal{B} \\ \mathcal{B}^T R^T & Z \end{pmatrix}. \quad (3) \]
First note that
\[ RS^T = \begin{pmatrix} Y & 0 \\ X & U \end{pmatrix} \begin{pmatrix} I & I \\ V & 0 \end{pmatrix} = \begin{pmatrix} Y & Y \\ Y & X \end{pmatrix} \]
where we exploited the fact that this matrix is symmetric.
73/82
Proof of Theorem 14
Moreover, all other blocks in (3) can be explicitly written as
\[ R\mathcal{B} = \begin{pmatrix} Y & 0 \\ X & U \end{pmatrix} \begin{pmatrix} B_w \\ B_K D_w \end{pmatrix} = \begin{pmatrix} Y B_w \\ X B_w + U B_K D_w \end{pmatrix}, \]
\[ \mathcal{C}S^T = \begin{pmatrix} C_z & D_z C_K \end{pmatrix} \begin{pmatrix} I & I \\ V & 0 \end{pmatrix} = \begin{pmatrix} C_z + D_z C_K V & C_z \end{pmatrix} \]
and
\[ R\mathcal{A}S^T = \begin{pmatrix} Y & 0 \\ X & U \end{pmatrix} \begin{pmatrix} A & B C_K \\ B_K C & A_K \end{pmatrix} \begin{pmatrix} I & I \\ V & 0 \end{pmatrix} = \begin{pmatrix} Y & 0 \\ X & U \end{pmatrix} \begin{pmatrix} A + B C_K V & A \\ B_K C + A_K V & B_K C \end{pmatrix} = \begin{pmatrix} Y(A + B C_K V) & Y A \\ X(A + B C_K V) + U B_K C + U A_K V & X A + U B_K C \end{pmatrix}. \]
74/82
Proof of Theorem 14
Therefore the matrices in (3) read as
\[ \begin{pmatrix} R_1 & R_{21}^T \\ R_{21} & R_2 \end{pmatrix}, \quad \begin{pmatrix} Y & Y & Y B_w \\ Y & X & X B_w + U B_K D_w \\ B_w^T Y & (*)^T & Z \end{pmatrix} \quad (4) \]
with blocks
\[ R_1 = (A + B C_K V)^T Y + Y (A + B C_K V) + (C_z + D_z C_K V)^T (C_z + D_z C_K V), \]
\[ R_2 = A^T X + X A + U B_K C + (U B_K C)^T + C_z^T C_z, \]
\[ R_{21} = A^T(Y - P) + \Delta A + (P + \Delta) B C_K V + U B_K C + U A_K V + P B B^T P. \]
75/82
Proof of Theorem 14
After these preparations we continue the proof. Since S has full row
rank, (1) clearly implies that \(R_1 \prec 0\), \(R_2 \prec 0\) and that the second
matrix in (4) is positive definite. By slide 34, \(R_1 \prec 0\) implies \(P \prec Y\).
On the other hand, by taking the Schur-complement we infer
\[ \begin{pmatrix} X & X B_w + U B_K D_w \\ (X B_w + U B_K D_w)^T & Z \end{pmatrix} - \begin{pmatrix} I \\ B_w^T \end{pmatrix} Y \begin{pmatrix} I & B_w \end{pmatrix} \succ 0. \]
Combining the last two inequalities implies (recalling X − P = ∆) that
\[ \begin{pmatrix} \Delta & \Delta B_w + U B_K D_w \\ (\Delta B_w + U B_K D_w)^T & Z - B_w^T P B_w \end{pmatrix} \succ 0. \]
With \(L = -\Delta^{-1} U B_K\) we have \(\Delta B_w + U B_K D_w = \Delta(B_w - L D_w)\) and
therefore \(Z - B_w^T P B_w - (B_w - L D_w)^T \Delta (B_w - L D_w) \succ 0\). This shows
\[ \operatorname{tr}\big((B_w - L D_w)^T \Delta (B_w - L D_w)\big) < \gamma^2 - \operatorname{tr}(B_w^T P B_w) \quad (*) \]
due to (1). Moreover \(R_2 \prec 0\) as at the bottom of slide 75 reveals
\[ (A - LC)^T \Delta + \Delta (A - LC) + P B B^T P \prec 0. \quad (**) \]
76/82
Proof of Theorem 14
By slide 6 the inequalities (∗), (∗∗) show that A − LC is Hurwitz and
\[ \left\| \left[ \begin{array}{c|c} A - LC & B_w - L D_w \\ \hline B^T P & 0 \end{array} \right] \right\|_2^2 < \gamma^2 - \operatorname{tr}(B_w^T P B_w). \]
This allows us to apply our result on H2-estimation on slide 41. Due to the
choice of Q we can hence conclude
\[ \operatorname{tr}(B^T P Q P B) + \operatorname{tr}(B_w^T P B_w) < \gamma^2. \]
Since γ > γopt was arbitrary we finally get
\[ \operatorname{tr}(B^T P Q P B) + \operatorname{tr}(B_w^T P B_w) \le \gamma_{\mathrm{opt}}^2. \]
This concludes the first part of the proof.
Equality is shown by constructing an optimal controller. For this purpose
we follow the above steps as much as possible in the reverse order,
which includes a motivation for the structure of the to-be-constructed
controller.
77/82
Proof of Theorem 14: Controller Construction
Due to our result on H2-estimation on slide 41 (applied for Cz replaced
with BᵀP) we know that A − QCᵀC is Hurwitz and leads to
\[ \left\| \left[ \begin{array}{c|c} A - Q C^T C & B_w - Q C^T D_w \\ \hline B^T P & 0 \end{array} \right] \right\|_2^2 = \operatorname{tr}(B^T P Q P B). \]
Therefore the solution ∆ of
\[ (A - Q C^T C)^T \Delta + \Delta (A - Q C^T C) + P B B^T P = 0 \quad (5) \]
satisfies
\[ \operatorname{tr}\big((B_w - Q C^T D_w)^T \Delta (B_w - Q C^T D_w)\big) = \operatorname{tr}(B^T P Q P B). \quad (6) \]
Note that
\[ \Delta (A - B B^T P - Q C^T C - A_K) = 0 \]
with AK = A − BBᵀP − QCᵀC. Further take X = ∆ + P such that
the two representations of R2 and R21 on slide 75 are indeed correct.
One then finds that the optimal closed-loop H2-norm squared equals
\[ \operatorname{tr}(B_w^T P B_w) + \operatorname{tr}(B^T P Q P B) \]
due to (6). This finishes the proof.
80/82
References
• [KK] H.W. Knobloch, H. Kwakernaak, Lineare Kontrolltheorie, Springer-Verlag
Berlin 1985
• [AM] K.J. Åström, R.M. Murray, Feedback Systems: An Introduction for Scientists
and Engineers, Princeton University Press, Princeton and Oxford, 2009 (freely
available online)
• J.P. Hespanha, Linear Systems Theory, Princeton University Press, 2009
• [S] E.D. Sontag, Mathematical Control Theory, Springer, New York 1998
• [K] T. Kailath, Linear Systems, Prentice Hall, Englewood Cliffs, 1980
• [F] B. Friedland, Control System Design: An Introduction to State-space Methods.
Dover Publications, 2005
• W.J. Rugh, Linear System Theory, Prentice-Hall, 2nd edition, 1998
• R. Brockett, Finite dimensional linear systems, Wiley, 1970
• W.M. Wonham, Linear Multivariable Control: A Geometric Approach, Springer-
Verlag, 3rd edition, 1985
81/82
References
• J. Zabczyk, Mathematical Control Theory: An Introduction, Birkhäuser, 2007
• B.A. Francis, A course in H∞ -control theory, Springer, 1987
• H.K. Khalil, Nonlinear Systems, Prentice Hall, 3rd edition, 2002
• L. Arnold, Stochastic Differential Equations: Theory and Applications, Wiley, 1974
• A. Packard, State-space Youla Notes, Course in Multivariable Control Systems,
UC Berkeley, 2008
• J.M. Coron, Control and nonlinearity, Mathematical Surveys and Monographs,
2007
• Y. Yuan, G.-B. Stan, S. Warnick, and J.M. Goncalves, Minimal realization of the
dynamical structure function and its application to network reconstruction, IEEE
Transactions on Automatic Control, http://www.bg.ic.ac.uk/research/g.stan/,
2012
• J.S. Freudenberg with C.V. Hollot and D.P. Looze, A first graduate course in
feedback control, 2003
82/82