S. Kanev
This paper was not presented at any IFAC meeting. This paper
was recommended for publication in revised form by Associate Editor
Yasumasa Fujisaki under the direction of Editor R. Tempo.
H∞-control (Palhares
et al., 1996; Peres & Palhares, 1995; Zhou et al., 1995),
pole-placement in LMI regions (Chilali, Gahinet, &
Apkarian, 1999; Scherer et al., 1997), etc. These, however,
require that the system state is measurable, thus imposing a
0005-1098/$ - see front matter © 2004 Elsevier Ltd. All rights reserved.
doi:10.1016/j.automatica.2004.01.028
ARTICLE IN PRESS
2 S. Kanev et al. / Automatica ( )
severe restriction on the class of systems to which they are
applicable.
Similar extension of most of the output-feedback con-
troller design approaches to the structured uncertainty case
is, unfortunately, not that simple due to the fact that the con-
troller parametrization explicitly depends on the state-space
matrices of the system, which are unknown (Apkarian &
Gahinet, 1995; Gahinet, 1996; Masubuchi, Ohara, & Suda,
1998; Scherer et al., 1997). Clearly, whenever the un-
certainty is unstructured (e.g. high-frequency unmodeled
dynamics), it can be recast into the general linear fractional
transformation (LFT) representation and using the small
gain theorem the design objective can be translated into
controller design in the absence of uncertainty (Zhou &
Doyle, 1998). Application of this approach to systems with
structured uncertainty, i.e. disregarding the structure of the
uncertainty, often turns out to be excessively conservative.
To overcome this conservatism, μ-synthesis was developed
(Zhou & Doyle, 1998; Balas, Doyle, Glover, Packard, &
Smith, 1998), which consists of an iterative procedure
(known as DK iteration) where at each iteration two
convex optimizations are executed: one in which the controller K is kept fixed, and one in which a certain diagonal
scaling matrix D is kept fixed. This procedure, however,
is not guaranteed to converge to a local optimum because
optimality in two fixed directions does not imply optimality
in all possible directions, and it may therefore lead to conservative results (VanAntwerp, Braatz, & Sahinidis, 1997).
Recently, some attempts have been made towards the
development of LMI-based approaches to output-feedback
controller design for systems with structured uncertainties in
the contexts of robust quadratic stability with disturbance at-
tenuation (Kose & Jabbari, 1999b), linear parameter-varying
(LPV) systems (Kose & Jabbari, 1999a), positive real syn-
thesis (Mahmoud & Xie, 2000), and H∞ control (Xie,
Fu, & de Souza, 1992). In Kose and Jabbari (1999b) the
authors develop a two-stage procedure for the design of
output-feedback controllers for continuous-time systems and
provide conditions under which the two stages of the design
can be solved sequentially. These conditions, however, re-
strict the class of systems that can be dealt with by the pro-
posed approach to minimum-phase, left-invertible systems.
The same idea has been used in Kose and Jabbari (1999a), but
extended to deal with LPV systems in which only some of
the parameters are measured and the others are treated as un-
certainty. In Mahmoud and Xie (2000) the output-feedback
design of positive real systems is investigated by expressing
the uncertainty in an LFT form and recasting the problem to
a simplified, but still non-linear, problem independent of the
uncertainties. A possible way, based on eigenvalue assign-
ment, to solve the non-linear optimization problem is pro-
posed that determines the output-feedback controller. This
approach is applicable to square systems only. In the case
when the uncertainty consists of one full uncertainty block
it was shown in Xie et al. (1992) how the problem can be
transformed into a standard H∞ problem.
In this paper, an initially feasible controller is obtained by
a two-step procedure in which first a robust multiobjective
H₂/H∞/pole-placement
state-feedback gain F is designed. This gain F is consequently kept fixed during the design of the remaining
state-space matrices of the dynamic output-feedback controller. Although the first step is convex, the second one
remains non-convex. However, by constraining a Lyapunov
function for the closed-loop system to have a block-diagonal
structure, this second step is easily transformed into an LMI
optimization problem.
The paper is organized as follows. In Section 2 the notation is defined and the problem is formulated. The proposed
algorithm for locally optimal controller design is next pre-
sented in Section 3. For the purposes of its initialization, an
approach to initially feasible controller computation is pro-
posed in Section 4, where a multiobjective criterion is con-
sidered. A summary of the complete algorithm is given in
Section 5. In Section 6 the design approach is tested on a
case study with a linear model of a space robotic manipu-
lator and, in addition, a comparison is made between sev-
eral existing methods for local BMI optimization. Finally,
Section 7 concludes the paper.
2. Preliminaries and problem formulation
2.1. Notation
The symbol ⋆ in LMIs will denote entries that follow from
symmetry. In addition to that, the notation Sym(A) ≜ A + A^T
is used, as well as the direct sum

  ⊕_{i=1}^{n} A_i = A_1 ⊕ ⋯ ⊕ A_n ≜ diag(A_1, A_2, …, A_n).

Also, t_i will denote the i-th element of the vector t. For two
matrices A ∈ R^{m×n} and B ∈ R^{p×q}, A ⊗ B ∈ R^{mp×nq} denotes
the Kronecker product of A and B.
The projection onto the cone of symmetric positive-semidefinite matrices is defined as

  [A]_+ = arg min_{S ⪰ 0} ‖A − S‖_F.  (1)

Similarly, the projection onto the cone of symmetric
negative-semidefinite matrices is defined as

  [A]_− = arg min_{S ⪯ 0} ‖A − S‖_F.  (2)
This projection has the following properties.
Lemma 1 (Properties of the projection). For a symmetric
matrix A, the following properties hold:
(1) A = [A]_+ + [A]_−,
(2) ⟨[A]_+, [A]_−⟩ = 0,
(3) Let A = UΛU^T, where U is an orthogonal matrix containing the eigenvectors of A, and Λ is a diagonal matrix with the eigenvalues λ_i, i = 1, …, n, of A appearing
on its diagonal. Then

  [A]_+ = U diag(λ_1^+, …, λ_n^+) U^T,  with λ_i^+ = max(0, λ_i), i = 1, …, n.

Equivalently,

  [A]_− = U diag(λ_1^−, …, λ_n^−) U^T,  with λ_i^− = min(0, λ_i), i = 1, …, n.

(4) [A]_+ and [A]_− are continuous in A.

For a proof see Calafiore and Polyak (2001).
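Property (3) gives a direct computational recipe for both projections via one eigendecomposition. A minimal NumPy sketch (the function names are ours) that also checks properties (1) and (2):

```python
import numpy as np

def proj_psd(A):
    """[A]_+ : projection onto the positive-semidefinite cone
    (Lemma 1(3)): keep the eigenvectors, clip negative eigenvalues to 0."""
    lam, U = np.linalg.eigh(A)
    return U @ np.diag(np.maximum(lam, 0.0)) @ U.T

def proj_nsd(A):
    """[A]_- : projection onto the negative-semidefinite cone."""
    lam, U = np.linalg.eigh(A)
    return U @ np.diag(np.minimum(lam, 0.0)) @ U.T

# Properties (1) and (2) of Lemma 1 on a random symmetric matrix:
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2
Ap, Am = proj_psd(A), proj_nsd(A)
print(np.allclose(A, Ap + Am))         # property (1): A = [A]_+ + [A]_-
print(abs(np.trace(Ap @ Am)) < 1e-9)   # property (2): <[A]_+, [A]_-> = 0
```

Since both projections share the eigenvectors of A and clip complementary parts of the spectrum, properties (1) and (2) hold exactly up to round-off.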
In the remaining part of this section we summarize some
existing results for system analysis and controller synthesis
which lie at the basis of the developments in the next Section.
2.2. H₂ and H∞ performance of uncertain systems

Consider the uncertain system

  σx = A^Δ x + B^Δ ω,
  ζ = C^Δ x + D^Δ ω,  (3)

where x ∈ R^n is the state, ω is the disturbance to the system, and where the symbol σ represents the s-operator (i.e. the time-derivative operator) for
continuous-time systems, and the z-operator (i.e. the shift
operator) for discrete-time systems. Define the matrix

  M^Δ_an ≜ [ A^Δ  B^Δ ;
             C^Δ  D^Δ ],  (4)

where the subscript "an" denotes that it will be used for
the purposes of analysis only. Later on, a similar matrix
for the synthesis problem will be defined. The matrices
(A^Δ, B^Δ, C^Δ, D^Δ) in (3) are assumed unknown, not measurable, but are known to lie in a given convex set M_an, defined
as

  M_an ≜ co{ [ A_1  B_1 ; C_1  D_1 ], …, [ A_N  B_N ; C_N  D_N ] }.  (5)

Next, the transfer function from ω to ζ is denoted as

  T^Δ_{ζω}(σ) ≜ C^Δ (σ I_n − A^Δ)^{−1} B^Δ + D^Δ.  (6)
The following lemma, which can be found in, e.g., Chilali
et al. (1999), can be used to check whether the eigenvalues
of a matrix are all located inside an LMI region.

Lemma 2. Let A be a real matrix, and define the LMI
region

  D ≜ { z ∈ C : L_D + Sym(z M_D) ≺ 0 },  (7)

for some given real matrices L_D = L_D^T and M_D. Then
λ(A) ⊂ D if and only if there exists a matrix P = P^T ≻ 0
such that

  L_D ⊗ P + Sym(M_D ⊗ (PA)) ≺ 0.  (8)

The class of LMI regions, defined in Eq. (7), is fairly
general: it can represent convex regions that are symmetric
with respect to the real axis (Gahinet et al., 1995).
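For a given complex point z, membership in an LMI region can be tested directly from the characteristic function in (7). A small NumPy sketch (function and variable names ours); the half-plane and disc instances are the two standard examples:

```python
import numpy as np

def in_lmi_region(z, LD, MD):
    """Pointwise test of Eq. (7): z lies in D iff
    f_D(z) = L_D + z*M_D + conj(z)*M_D^T is negative definite."""
    F = LD + z * MD + np.conj(z) * MD.T       # Hermitian by construction
    return bool(np.all(np.linalg.eigvalsh(F) < 0))

# Open left half-plane (continuous-time stability): L_D = 0, M_D = 1.
LD_lhp, MD_lhp = np.zeros((1, 1)), np.ones((1, 1))
print(in_lmi_region(-1 + 2j, LD_lhp, MD_lhp))   # Re(z) < 0 holds
print(in_lmi_region(0.5 + 0j, LD_lhp, MD_lhp))  # Re(z) < 0 fails

# Disc of radius r centred at the origin (discrete-time stability for r = 1):
r = 1.0
LD_disc = np.array([[-r, 0.0], [0.0, -r]])
MD_disc = np.array([[0.0, 1.0], [0.0, 0.0]])
print(in_lmi_region(0.3 - 0.4j, LD_disc, MD_disc))  # |z| < 1 holds
```

For the disc, f_D(z) = [[−r, z], [z̄, −r]] is negative definite exactly when |z| < r, recovering the usual discrete-time stability region.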
In Scherer et al. (1997) and Masubuchi et al. (1998) LMI
conditions are provided for the evaluation of the H₂ and
H∞ norms of system (3). They are built from the matrices

  M_CT(A^Δ, B^Δ, P) ≜ [ Sym(PA^Δ)  PB^Δ ;
                        ⋆           −I ],

  M_DT(A^Δ, B^Δ, P) ≜ [ P  PA^Δ  PB^Δ ;
                        ⋆  P     0 ;
                        ⋆  ⋆     I ],  (9)

for the continuous-time and the discrete-time case, respectively.

Lemma 3 (H₂ norm). Assume that D^Δ = 0. Then

  sup_{M^Δ_an ∈ M_an} ‖T^Δ_{ζω}(σ)‖₂² < γ₂²  (10)

holds if LMI conditions built from the matrices in (9), together with a trace condition on an auxiliary symmetric matrix variable, are feasible at all N vertices of M_an.

Lemma 4 (H∞ norm). Similarly,

  sup_{M^Δ_an ∈ M_an} ‖T^Δ_{ζω}(σ)‖∞ < γ∞  (11)

holds if the corresponding bounded-real LMI conditions are feasible at all N vertices of M_an.
2.3. Problem formulation

Consider the uncertain system

  σx = A^Δ x + B^Δ_ω ω + B^Δ_u u,
  ζ = C^Δ_ζ x + D^Δ_ζω ω + D^Δ_ζu u,
  y = C^Δ_y x + D^Δ_yω ω + D^Δ_yu u,  (12)

where the signals x, ω, and ζ have the same meaning and the
same dimensions as in (3), and where u ∈ R^m is the control
action and y is the measured output. Define the matrix

  M^Δ_syn ≜ [ A^Δ    B^Δ_ω   B^Δ_u ;
              C^Δ_ζ  D^Δ_ζω  D^Δ_ζu ;
              C^Δ_y  D^Δ_yω  D^Δ_yu ],  (13)
where the subscript "syn" denotes that it will now be used
for the purposes of synthesis. We also define the convex set

  M_syn ≜ co{ [ A_i      B_{ω,i}   B_{u,i} ;
                C_{ζ,i}  D_{ζω,i}  D_{ζu,i} ;
                C_{y,i}  D_{yω,i}  D_{yu,i} ] : i = 1, …, N }.  (14)
Interconnected to system (12) is the following full-order
dynamic output-feedback controller

  C_o :  σx_c = A_c x_c + B_c y,
         u = F x_c,  (15)

with x_c ∈ R^n
its state. This yields the closed-loop system

  σ x̄ = A^Δ_cl x̄ + B^Δ_cl ω,
  ζ = C^Δ_cl x̄ + D^Δ_cl ω,  (16)

where it is denoted x̄^T = [x^T, (x_c)^T], and

  [ A^Δ_cl  B^Δ_cl ;
    C^Δ_cl  D^Δ_cl ] ≜
  [ A^Δ        B^Δ_u F             B^Δ_ω ;
    B_c C^Δ_y  A_c + B_c D^Δ_yu F  B_c D^Δ_yω ;
    C^Δ_ζ      D^Δ_ζu F            D^Δ_ζω ].  (17)
Denoting the transfer function from the disturbance ω to the
controlled output ζ, corresponding to the state-space model
(16), as

  T^Δ_cl(σ) ≜ C^Δ_cl (σ I_{2n} − A^Δ_cl)^{−1} B^Δ_cl + D^Δ_cl,  (18)

this paper addresses the following problem.
Multiobjective design: Given positive scalars α₂ and α∞,
solve

  min_{A_c, B_c, F}  α₂ γ₂² + α∞ γ∞

  s.t.
  H₂:  sup_{M^Δ_syn ∈ M_syn} ‖L₂ (T^Δ_cl(σ) − D^Δ_cl) R₂‖₂² < γ₂²,
  H∞:  sup_{M^Δ_syn ∈ M_syn} ‖L∞ T^Δ_cl(σ) R∞‖∞ < γ∞,
  PP:  λ(A^Δ_cl) ⊂ D,  ∀ M^Δ_syn ∈ M_syn,  (19)

where the matrices L₂, R₂, L∞, and R∞ are given weighting
matrices that select the input-output channels of interest.
3. Locally optimal controller design

The multiobjective design problem (19) leads, after application of the analysis results at the vertices of M_syn, to constraints that are bilinear in the decision variables. Generically, consider matrix-valued functions of the form

  BMI^(k)(x, y) ≜ F^(k)_{00} + Σ_{i=1}^{N₁} F^(k)_{i0} x_i + Σ_{j=1}^{N₂} F^(k)_{0j} y_j
                  + Σ_{i=1}^{N₁} Σ_{j=1}^{N₂} F^(k)_{ij} x_i y_j,  (20)
where F^(k)_{ij} = (F^(k)_{ij})^T, i = 0, 1, …, N₁, j = 0, 1, …, N₂,
k = 1, …, M, are given symmetric matrices. In this paper
we consider the following BMI optimization problem:

  (P):  min γ over x, y, and γ
        s.t.  BMI^(k)(x, y) ⪯ 0,  k = 1, …, M,
              ⟨c, x⟩ + ⟨d, y⟩ ⩽ γ,
              x^l ⩽ x ⩽ x^u,  y^l ⩽ y ⩽ y^u,  (21)
where x^l, x^u ∈ R^{N₁} and y^l, y^u ∈ R^{N₂} are given vectors with finite
elements. This problem is known to be NP-hard (Toker &
Özbay, 1995).
For a fixed γ, denote by (FP) the associated feasibility
problem of finding a pair (x, y) that satisfies the constraints
of (P), and define the function

  φ_γ(x, y) ≜ ‖ [ ⊕_{k=1}^{M} BMI^(k)(x, y) ]_+ ‖²_F ⩾ 0.  (23)

From the definition of the projection [·]_+, and from the properties of the Frobenius norm, we can write

  φ_γ(x, y) = Σ_{k=1}^{M} ‖ [BMI^(k)(x, y)]_+ ‖²_F ≜ Σ_{k=1}^{M} φ^(k)_γ(x, y).
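The penalty φ_γ can be evaluated directly from its definition (23). A small sketch, assuming a nested-list layout F[k][i][j] for the data matrices (our choice), on one scalar BMI −1 + x + y ⪯ 0:

```python
import numpy as np

def proj_psd(A):
    """[A]_+ : clip negative eigenvalues (Lemma 1(3))."""
    lam, U = np.linalg.eigh(A)
    return U @ np.diag(np.maximum(lam, 0.0)) @ U.T

def bmi(k, x, y, F):
    """Evaluate BMI^(k)(x, y) of Eq. (20); F[k][i][j] holds F^(k)_{ij}."""
    G = F[k][0][0].copy()
    for i, xi in enumerate(x, start=1):
        G = G + F[k][i][0] * xi
        for j, yj in enumerate(y, start=1):
            G = G + F[k][i][j] * xi * yj
    for j, yj in enumerate(y, start=1):
        G = G + F[k][0][j] * yj
    return G

def phi(x, y, F):
    """phi_gamma(x, y) = sum_k ||[BMI^(k)(x, y)]_+||_F^2  (Eq. (23))."""
    return sum(np.linalg.norm(proj_psd(bmi(k, x, y, F)), 'fro') ** 2
               for k in range(len(F)))

# One 1x1 BMI: BMI(x, y) = -1 + x + y, feasible iff x + y <= 1.
F = [[[np.array([[-1.0]]), np.array([[1.0]])],
      [np.array([[1.0]]),  np.array([[0.0]])]]]
print(phi([0.0], [0.0], F))   # 0.0 -> feasible point
print(phi([1.0], [1.0], F))   # 1.0 -> constraint violated
```

As the example shows, φ_γ vanishes exactly on the feasible set of the matrix inequalities and is positive elsewhere.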
In this way we have rewritten the BMI feasibility problem
(FP) as an optimization problem, where the goal is now to
find a local minimum of φ_γ(x, y). Note, however, that φ_γ(x, y) is
not convex. Even worse, it may have multiple local minima.
Now, if (x_opt, y_opt) is a local minimum for φ_γ such that
φ_γ(x_opt, y_opt) = 0, then (x_opt, y_opt) is also a feasible solution to
(FP). However, if (x_opt, y_opt) is such that φ_γ(x_opt, y_opt) > 0,
then we cannot say anything about the feasibility of (FP).
The idea is then to start from a feasible solution for a given
γ, and then to apply the method of bisection over γ to achieve
a local minimum with a desired precision, at each iteration
searching for a feasible solution to (FP). A more extensive description of this bisection algorithm is provided in
Section 5.
Let us now concentrate on the problem of finding a local
solution to

  min_{x, y} φ_γ(x, y).  (24)

To this end, an expression for the gradient of φ_γ is needed.

Theorem 5. For a continuously differentiable matrix-valued function G : R^{N_t} → R^{q×q}, G = G^T, and φ : R^{q×q} → R defined as φ(M) =
‖[M]_+‖²_F, the function

  (φ ∘ G)(ξ) ≜ ‖ [G(ξ)]_+ ‖²_F  (25)

is differentiable, and its gradient

  ∇(φ ∘ G)(ξ) ≜ [ ∂/∂ξ₁, ∂/∂ξ₂, …, ∂/∂ξ_{N_t} ]^T (φ ∘ G)(ξ)

is given by

  ∂/∂ξ_i (φ ∘ G)(ξ) = 2 ⟨ [G(ξ)]_+, ∂/∂ξ_i G(ξ) ⟩.  (26)
Proof. Using the properties of the projection [·]_+, we infer,
for any symmetric matrices G and ΔG, that

  φ(G + ΔG) = ‖G + ΔG − [G + ΔG]_−‖²_F
            = min_{S ⪯ 0} ‖G + ΔG − S‖²_F
            ⩽ ‖G + ΔG − [G]_−‖²_F
            = ‖[G]_+ + ΔG‖²_F
            = ‖[G]_+‖²_F + 2⟨[G]_+, ΔG⟩ + ‖ΔG‖²_F.

On the other hand,

  φ(G + ΔG) = ‖G + ΔG − [G + ΔG]_−‖²_F
            = ‖[G]_+ + ([G]_− + ΔG − [G + ΔG]_−)‖²_F
            ⩾ ‖[G]_+‖²_F + 2⟨[G]_+, ΔG⟩ + 2⟨[G]_+, [G]_−⟩
              − 2⟨[G]_+, [G + ΔG]_−⟩
            ⩾ ‖[G]_+‖²_F + 2⟨[G]_+, ΔG⟩.

Thus we have φ(G + ΔG) = φ(G) + 2⟨[G]_+, ΔG⟩ +
o(‖ΔG‖_F) for any symmetric ΔG.
Now, take ΔG(ξ) ≜ G(ξ + Δξ) − G(ξ). Since G(ξ) is
continuously differentiable it follows that

  G(ξ + Δξ) = G(ξ) + Σ_{i=1}^{N_t} [ ∂/∂ξ_i G(ξ) ] Δξ_i + o(‖Δξ‖₂).
Therefore,

  (φ ∘ G)(ξ + Δξ) = (φ ∘ G)(ξ)
    + 2 Σ_{i=1}^{N_t} ⟨ [G(ξ)]_+, ∂/∂ξ_i G(ξ) ⟩ Δξ_i + o(‖Δξ‖₂).

Hence (φ ∘ G) is differentiable and its partial derivatives are
given by expressions (26). □
The partial derivatives of the function φ_γ then follow directly from Theorem 5:

  ∂/∂x_i φ_γ(x, y) = 2 Σ_{k=1}^{M} ⟨ [BMI^(k)(x, y)]_+, F^(k)_{i0} + Σ_{j=1}^{N₂} F^(k)_{ij} y_j ⟩,

  ∂/∂y_j φ_γ(x, y) = 2 Σ_{k=1}^{M} ⟨ [BMI^(k)(x, y)]_+, F^(k)_{0j} + Σ_{i=1}^{N₁} F^(k)_{ij} x_i ⟩.
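These gradient formulas can be checked numerically against finite differences. The sketch below (one BMI block with scalar x and y and random 2×2 symmetric data of our choosing) implements (26):

```python
import numpy as np

def proj_psd(A):
    """[A]_+ : clip negative eigenvalues."""
    lam, U = np.linalg.eigh(A)
    return U @ np.diag(np.maximum(lam, 0.0)) @ U.T

# One BMI block with N1 = N2 = 1 and random symmetric 2x2 data matrices.
rng = np.random.default_rng(1)
sym = lambda B: (B + B.T) / 2
F00, F10, F01, F11 = (sym(rng.standard_normal((2, 2))) for _ in range(4))

bmi = lambda x, y: F00 + F10 * x + F01 * y + F11 * x * y
phi = lambda x, y: np.linalg.norm(proj_psd(bmi(x, y)), 'fro') ** 2

def grad_phi(x, y):
    """Eq. (26): d(phi)/dx = 2 <[BMI]_+, dBMI/dx>, and similarly for y."""
    P = proj_psd(bmi(x, y))
    return np.array([2 * np.trace(P @ (F10 + F11 * y)),
                     2 * np.trace(P @ (F01 + F11 * x))])

# Central finite-difference check of the analytic gradient:
x0, y0, h = 0.3, -0.2, 1e-6
fd = np.array([(phi(x0 + h, y0) - phi(x0 - h, y0)) / (2 * h),
               (phi(x0, y0 + h) - phi(x0, y0 - h)) / (2 * h)])
print(np.allclose(grad_phi(x0, y0), fd, atol=1e-5))
```

Since the inner product ⟨A, B⟩ = trace(A^T B) and the projection is symmetric, the trace form above evaluates (26) directly.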
Note that these partial derivatives are continuous functions
(see Lemma 1), so that φ_γ ∈ C¹. Note also that a lower
bound on the cost function in (21) can always be obtained
by solving the so-called relaxed LMI optimization problem
(Tuan & Apkarian, 2000)

  γ_LB = min_{x, y, w} ⟨c, x⟩ + ⟨d, y⟩,
  subject to:  x ∈ [x^l, x^u],  y ∈ [y^l, y^u],  w_{ij} ∈ [w^l_{ij}, w^u_{ij}],

  F^(k)_{00} + Σ_{i=1}^{N₁} F^(k)_{i0} x_i + Σ_{j=1}^{N₂} F^(k)_{0j} y_j
    + Σ_{i=1}^{N₁} Σ_{j=1}^{N₂} F^(k)_{ij} w_{ij} ⪯ 0  for k = 1, 2, …, M,  (27)

where w^l_{ij} = min{ x^l_i y^l_j, x^l_i y^u_j, x^u_i y^l_j, x^u_i y^u_j } and
w^u_{ij} = max{ x^l_i y^l_j, x^l_i y^u_j, x^u_i y^l_j, x^u_i y^u_j }.
If this problem is not feasible, then the original BMI problem is also not feasible.
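The interval bounds on the bilinear terms w_ij = x_i y_j used in (27) are the elementwise extrema over the four corner products. A small helper (naming ours):

```python
import numpy as np

def mccormick_bounds(x_lo, x_hi, y_lo, y_hi):
    """Bounds [w_lo, w_hi] on w_ij = x_i * y_j for x in [x_lo, x_hi] and
    y in [y_lo, y_hi], as used in the relaxed LMI problem (27)."""
    x_lo, x_hi, y_lo, y_hi = map(np.asarray, (x_lo, x_hi, y_lo, y_hi))
    # All four corner products x_i * y_j for every (i, j) pair:
    corners = np.stack([np.outer(x_lo, y_lo), np.outer(x_lo, y_hi),
                        np.outer(x_hi, y_lo), np.outer(x_hi, y_hi)])
    return corners.min(axis=0), corners.max(axis=0)

w_lo, w_hi = mccormick_bounds([-1.0], [2.0], [-3.0], [4.0])
print(w_lo[0, 0], w_hi[0, 0])   # -6.0 8.0
```

For x₁ ∈ [−1, 2] and y₁ ∈ [−3, 4] the corner products are {3, −4, −6, 8}, so the product is confined to [−6, 8], which is exactly what the helper returns.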
Now that it has been shown that the function φ_γ is C¹ and an expression for its gradient has been derived, the cautious BFGS
method (Li & Fukushima, 2001), a quasi-Newton type optimization algorithm, can be used for finding a local minimum of φ_γ. The convergence of this algorithm is established in Li and Fukushima (2001) under the assumptions
that (a) the level set Ω = { (x, y) : φ_γ(x, y) ⩽ φ_γ(x^(0), y^(0)) }
is bounded, (b) φ_γ is continuously differentiable, and (c) the
gradient ∇φ_γ is Lipschitz on Ω, i.e. there exists a constant
L > 0 such that

  ‖∇φ_γ(x, y) − ∇φ_γ(x̃, ỹ)‖₂ ⩽ L ‖ [x − x̃ ; y − ỹ] ‖₂,  ∀(x, y), (x̃, ỹ) ∈ Ω.
For the problem considered in this section the level set
Ω is compact (see Eq. (21)), so that condition (a) holds.
Condition (b) was shown in Theorem 5. Condition (c) follows by observing that the projection [·]_+ is Lipschitz, and
hence, since BMI^(k)(x, y) is smooth, the partial derivatives ∂φ_γ/∂x_i and ∂φ_γ/∂y_j satisfy a local Lipschitz
condition. The compactness of the set Ω then implies the
desired global Lipschitz condition.
Note that the optimization problem discussed above applies to a more general class of problems with smooth
non-linear matrix inequality (NMI) constraints. However,
finding an initially feasible solution to start the local optimization is a rather difficult problem, for which reason
NMI problems fall outside the scope of this paper. It also
needs to be noted here that any algorithm with guaranteed
convergence to a local minimum could be used instead of
the BFGS algorithm.
In the next section we focus on the problem of finding an
initial feasible solution to the BMI optimization problem.
4. Initial robust multiobjective controller design
In this section, a two-step procedure is presented for the
design of an initial feasible robust output-feedback con-
troller. It can be summarized as follows:
Step 1: Design a robust state-feedback gain matrix F such
that the multiobjective criterion of form (19) is satisfied for
the closed-loop system with state-feedback control u = Fx.
This problem is convex and is considered in Section 4.1.
Step 2: Plug the state-feedback gain matrix F, computed
at Step 1, into the original closed-loop system (16), and
search for a solution to the multiobjective control problem,
defined in Eq. (19), in terms of the remaining unknown
controller matrices A_c and B_c. This problem, in contrast to
the one in Step 1 above, remains non-convex. It is discussed
in Section 4.2.
In the remaining part of this section we proceed with
proposing a solution to the problems in the two steps above.
4.1. Step 1: Robust multiobjective state-feedback design
The state-feedback case for system (12) is equivalent to
taking C
A
,
=I
n
, D
A
,
=0
nn
, D
A
,u
=0
nm
, so that , x. Fur-
thermore, we consider the constant state-feedback controller
u = Fx, which results in the closed-loop transfer function
1
A
cl
(o) ,D
A
:
+ (C
A
:
+ D
A
:u
F)
(oI
n
(A
A
+ B
A
u
F))
1
B
A
. (28)
The following theorem can be used for robust multiobjective
state-feedback design for discrete-time and continuous-time
systems. The proof follows after rewriting Lemmas 3 and
4 for the closed-loop system (28) as LMIs in Q = P
1
,
with subsequent change of variables. It will be omitted here
(Scherer et al., 1997; Oliveira et al., 2002).
Theorem 6 (State-feedback case). Consider system (12),
and assume that C^Δ_y = I_n, D^Δ_yω = 0_{n×n}, D^Δ_yu = 0_{n×m}. Consider
the controller u = Fx resulting in the closed-loop transfer
function T^Δ_cl(σ), defined in (28). Given L₂, R₂, L∞, and R∞,
the conditions

  sup_{M^Δ_syn ∈ M_syn} ‖L₂ (T^Δ_cl(σ) − D^Δ_ζω) R₂‖₂² < γ₂²,
  sup_{M^Δ_syn ∈ M_syn} ‖L∞ T^Δ_cl(σ) R∞‖∞ < γ∞,
  λ(A^Δ + B^Δ_u F) ⊂ D,  ∀ M^Δ_syn ∈ M_syn  (29)

hold if there exist matrices Q = Q^T, W = W^T, R = R^T, and
L such that for all i = 1, …, N the following LMIs hold:

  PP (with Q ≻ 0):
  L_D ⊗ Q + Sym(M_D ⊗ Ã_i) ≺ 0,

  H₂ (with γ₂² > trace(R)):
  [ R  L₂ C̃_i ;
    ⋆  Q ] ≻ 0,  (30)

  [ Sym(Ã_i)  B_{ω,i} R₂ ;
    ⋆          −I ] ≺ 0  (continuous case),

  [ Q  Ã_i  B_{ω,i} R₂ ;
    ⋆  Q    0 ;
    ⋆  ⋆    I ] ≻ 0  (discrete case),  (31)

  H∞:
  [ Sym(Ã_i)  B_{ω,i} R∞  C̃_i^T L∞^T ;
    ⋆          −γ∞ I       R∞^T D^T_{ζω,i} L∞^T ;
    ⋆          ⋆           −γ∞ I ] ≺ 0  (continuous case),

  [ Q  Ã_i  B_{ω,i} R∞  0 ;
    ⋆  Q    0            C̃_i^T L∞^T ;
    ⋆  ⋆    γ∞ I         R∞^T D^T_{ζω,i} L∞^T ;
    ⋆  ⋆    ⋆            γ∞ I ] ≻ 0  (discrete case),  (32)

where Ã_i ≜ A_i Q + B_{u,i} L and C̃_i ≜ C_{ζ,i} Q + D_{ζu,i} L. The
state-feedback gain matrix F is then given by F = L Q^{−1}.
4.2. Step 2: Robust multiobjective output-feedback design
In what follows we assume that the optimal state-feedback
gain F has already been computed at Step 1. In contrast to
Step 1, the problem defined in Step 2 of the algorithm at the
beginning of Section 4 is certainly non-convex in the variables P, W, A_c, and B_c, since application of Lemmas 3 and
4 to the closed-loop system in Eq. (17) leads to non-linear
matrix inequalities due to the fact that the variables A_c and
B_c appear in the closed-loop system matrices A^Δ_cl and B^Δ_cl.
Note that the function J = x̄^T P x̄ acts as a Lyapunov
function for the closed-loop system. This can easily be
seen by observing that the matrix inequalities in Lemmas
3 and 4, when applied to the closed-loop system (16), imply (A^Δ_cl)^T P A^Δ_cl − P ≺ 0 for the discrete-time case, and
P A^Δ_cl + (A^Δ_cl)^T P ≺ 0 for the continuous-time case.
The purpose of this section is to show how, by introducing
some conservatism by means of constraining the Lyapunov
matrix P to have the block-diagonal structure

  P = X ⊕ Y,  (33)

the non-linear matrix inequalities in question can be written
as LMIs. However, it can easily be seen that a necessary
condition for the existence of a structured Lyapunov matrix
of form (33) for A^Δ_cl, defined in (17), is that the matrix A^Δ
is stable for all M^Δ_syn ∈ M_syn. Luckily, this restriction can be
removed by introducing a change of basis of the state vector
of the closed-loop system:

  x̃ = T x̄ = [ x ; x − x_c ].  (34)
This changes the state-space matrices of the closed-loop
system to

  Ã^Δ_cl = T A^Δ_cl T^{−1}
  = [ A^Δ + B^Δ_u F                                    −B^Δ_u F ;
      A^Δ + B^Δ_u F − B_c C^Δ_y − A_c − B_c D^Δ_yu F    A_c + B_c D^Δ_yu F − B^Δ_u F ],

  B̃^Δ_cl = T B^Δ_cl = [ B^Δ_ω ; B^Δ_ω − B_c D^Δ_yω ],

  C̃^Δ_cl = C^Δ_cl T^{−1} = [ C^Δ_ζ + D^Δ_ζu F   −D^Δ_ζu F ],

  D̃^Δ_cl = D^Δ_cl = D^Δ_ζω.
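The block structure above can be verified numerically: with T = [I 0; I −I] (which is its own inverse), the transformed matrix T A^Δ_cl T^{−1} has the stated blocks. A quick check with random data (dimensions and names ours):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, p = 3, 2, 2
A   = rng.standard_normal((n, n)); Bu  = rng.standard_normal((n, m))
Cy  = rng.standard_normal((p, n)); Dyu = rng.standard_normal((p, m))
F   = rng.standard_normal((m, n))
Ac  = rng.standard_normal((n, n)); Bc  = rng.standard_normal((n, p))

# Closed-loop A-matrix from Eq. (17) and the basis change of Eq. (34):
Acl = np.block([[A,       Bu @ F],
                [Bc @ Cy, Ac + Bc @ Dyu @ F]])
T = np.block([[np.eye(n),  np.zeros((n, n))],
              [np.eye(n), -np.eye(n)]])     # x_tilde = [x; x - x_c]

Acl_t = T @ Acl @ T                          # T is involutory: T^{-1} = T
print(np.allclose(Acl_t[:n, :n], A + Bu @ F))                  # (1,1) block
print(np.allclose(Acl_t[:n, n:], -Bu @ F))                     # (1,2) block
print(np.allclose(Acl_t[n:, n:], Ac + Bc @ Dyu @ F - Bu @ F))  # (2,2) block
```

In particular, the (1,1) block A^Δ + B^Δ_u F is exactly the state-feedback closed loop, which is what makes the structured Lyapunov matrix assumption workable.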
Now, searching for a structured Lyapunov matrix for this
(equivalent) closed-loop system only necessitates the stability of the matrix (A^Δ + B^Δ_u F) for all M^Δ_syn ∈ M_syn, which
is guaranteed to hold by the design of the state-feedback
gain F.
We are now ready to present the following result.
Theorem 7 (Output-feedback case). Consider the closed-loop system (16), with transfer function T^Δ_cl(σ), defined
in Eq. (18), formed by interconnecting plant (12) with
the dynamic output-feedback controller (15), in which the
state-feedback gain matrix F is given. Then, given matrices
L₂, R₂, L∞, and R∞, the conditions

  sup_{M^Δ_an ∈ M_an} ‖L₂ (T^Δ_cl(σ) − D^Δ_cl) R₂‖₂² < γ₂²,
  sup_{M^Δ_an ∈ M_an} ‖L∞ T^Δ_cl(σ) R∞‖∞ < γ∞,
  λ(A^Δ_cl) ⊂ D,  ∀ M^Δ_an ∈ M_an  (35)
hold if there exist matrices W = W^T, X = X^T, Y = Y^T,
Z and G such that the following system of LMIs has a
feasible solution for all i = 1, …, N:

  PP (with P ≻ 0):
  L_D ⊗ P + Sym(M_D ⊗ M_i) ≺ 0,

  H₂ (with γ₂² > trace(W)):
  [ W  L₂ O_i ;
    ⋆  P ] ≻ 0,  (36)

  [ Sym(M_i)  N_i R₂ ;
    ⋆          −I ] ≺ 0  (continuous case),

  [ P  M_i  N_i R₂ ;
    ⋆  P    0 ;
    ⋆  ⋆    I ] ≻ 0  (discrete case),  (37)

  H∞:
  [ Sym(M_i)  N_i R∞  O_i^T L∞^T ;
    ⋆          −γ∞ I   R∞^T D^T_{ζω,i} L∞^T ;
    ⋆          ⋆       −γ∞ I ] ≺ 0  (continuous case),

  [ P  M_i  N_i R∞  0 ;
    ⋆  P    0        O_i^T L∞^T ;
    ⋆  ⋆    γ∞ I     R∞^T D^T_{ζω,i} L∞^T ;
    ⋆  ⋆    ⋆        γ∞ I ] ≻ 0  (discrete case),  (38)
where the matrices M_i, N_i, P, and O_i are defined as

  M_i ≜ [ X(A_i + B_{u,i} F)                               −X B_{u,i} F ;
          Y(A_i + B_{u,i} F) − Z − G(C_{y,i} + D_{yu,i} F)   Z + G D_{yu,i} F − Y B_{u,i} F ],  (39)

  N_i ≜ [ X B_{ω,i} ; Y B_{ω,i} − G D_{yω,i} ],

  P ≜ X ⊕ Y,

  O_i ≜ [ C_{ζ,i} + D_{ζu,i} F   −D_{ζu,i} F ].

Furthermore, the controller (15) with A_c = Y^{−1} Z and B_c =
Y^{−1} G achieves (35).
Proof. For the sake of brevity, only an outline of the proof
is given. Application of Lemmas 3 and 4 to the closed-loop
system matrices results in the bilinear terms P Ã^Δ_cl and P B̃^Δ_cl
in the matrices M_CT(Ã^Δ_cl, B̃^Δ_cl, P) and M_DT(Ã^Δ_cl, B̃^Δ_cl, P),
defined in (9). Clearly, with P defined as in (33) we can
write

  P Ã^Δ_cl
  = [ X(A^Δ + B^Δ_u F)                                    −X B^Δ_u F ;
      Y(A^Δ + B^Δ_u F) − Y B_c (C^Δ_y + D^Δ_yu F) − Y A_c   Y A_c + Y B_c D^Δ_yu F − Y B^Δ_u F ],

  P B̃^Δ_cl = [ X B^Δ_ω ; Y B^Δ_ω − Y B_c D^Δ_yω ].

Making the one-to-one change of variables

  [ Y A_c   Y B_c ] = [ Z   G ]

results in P Ã_cl,i = M_i and P B̃_cl,i = N_i, with the matrices M_i and N_i, defined as in (39), being linear in the new
variables. □
5. Summary of the approach
We next summarize the proposed approach to robust
dynamic output-feedback controller design.
Algorithm 1 (Robust output-feedback design). Use the result in Theorem 7 to find an initially feasible controller,
represented by the variables (x₀, y₀, γ₀) related to the corresponding BMI problem (21). Set (x*, y*, γ_UB^(0)) = (x₀, y₀, γ₀).
Solve the relaxed LMI problem (27) to obtain γ_LB^(0). Select the desired precision (relative tolerance) TOL and the
maximum number of iterations allowed k_max. Set k = 1.

Step 1: Take γ_k = (γ_UB^(k−1) + γ_LB^(k−1))/2, and solve the problem (x_k, y_k) = arg min φ_{γ_k}(x, y), starting with initial condition
(x*, y*).

Step 2: If φ_{γ_k}(x_k, y_k) = 0 then set (x*, y*, γ_UB^(k)) = (x_k, y_k, γ_k),
else set γ_LB^(k) = γ_k.

Step 3: If |γ_UB^(k) − γ_LB^(k)| < TOL·|γ_UB^(k)| or k > k_max then stop
((x*, y*, γ_UB^(k)) is the best (locally) feasible solution with the
desired tolerance), else set k ← k + 1 and go to Step 1.
Note that γ_LB at each iteration represents an infeasible
value for γ, while γ_UB a feasible one. At each iteration of
the algorithm the distance between these two bounds is
halved. It should again be noted that if for a given γ_k
the optimal value of the cost function φ_{γ_k}(x_k, y_k) is non-zero
then the algorithm treats γ_k as infeasible. Since the algorithm converges to a local minimum it may happen that the
original BMI problem is actually feasible for this γ_k (e.g.
corresponding to the global optimum) but the local optimization is unable to confirm feasibility, an effect that cannot
be circumvented.
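The bisection loop of Algorithm 1 can be sketched as follows; `solve_phi` stands in for the local minimization of φ_γ (e.g. by the cautious BFGS method) and is a hypothetical callback of our design:

```python
def bisect_gamma(gamma_lb, gamma_ub, solve_phi, tol=1e-3, k_max=50):
    """Bisection over gamma (Steps 1-3 of Algorithm 1).
    solve_phi(gamma, warm_start) is assumed to return (point, phi_value),
    where phi_value == 0.0 certifies feasibility of this gamma."""
    best = None                        # best feasible point found so far
    for _ in range(k_max):
        gamma = 0.5 * (gamma_lb + gamma_ub)
        point, val = solve_phi(gamma, best)
        if val == 0.0:                 # gamma feasible: tighten upper bound
            gamma_ub, best = gamma, point
        else:                          # treated as infeasible: raise lower bound
            gamma_lb = gamma
        if abs(gamma_ub - gamma_lb) < tol * abs(gamma_ub):
            break
    return best, gamma_ub

# Toy oracle: pretend gamma is feasible exactly when gamma >= 0.5.
oracle = lambda g, w: ((g, 0.0) if g >= 0.5 else (None, 1.0))
point, g_ub = bisect_gamma(0.0, 1.0, oracle)
print(round(g_ub, 4))   # 0.5
```

As in the text, the returned upper bound is always a certified-feasible value of γ, while values rejected by the local solver only tighten the lower bound.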
6. An illustrative example
6.1. Locally optimal robust multiobjective controller
design
The example considered consists of a linear model of one
joint of a real-life space robot manipulator (SRM) system,
taken from Kanev and Verhaegen (2000). The state-space
Table 1
The nominal values of the parameters in the linear model of one joint of
the SRM

Parameter                        Sym.    Value
Gearbox ratio                    N       260.6
Joint angle of inertial axis     θ       variable
Effective joint input torque     T_e     variable
Motor torque constant            K_t     0.6
The damping coefficient          β       [0.36, 0.44]
Deformation torque of gearbox    T_def   variable
Inertia of the input axis        I_m     0.0011
Inertia of the output system     I_son   400
Joint angle of the output axis   φ       variable
Motor current                    i_c     variable
Spring constant                  c       (117000, 143000)
model of the system is given by

  ẋ(t) = [ 0   1          0                           0 ;
           0   0          c/(N² I_m)                  0 ;
           0   0          0                           1 ;
           0   −β/I_son   −(c/(N² I_m) + c/I_son)     −β/I_son ] x(t)

       + [ 0 ;  K_t/(N I_m) ;  0 ;  −K_t/(N I_m) ] u(t),

  y(t) = [ 0  N  0  0 ;
           1  0  1  0 ] x(t) + [ 1 ; 0 ] d(t),

  z(t) = [ 1  0  1  0 ] x(t) + d(t).
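A numerical sketch of the A-matrix over the vertices of the uncertainty box (β, c); the sign pattern follows the two-mass flexible-joint interpretation assumed above, which should be checked against the original model:

```python
import numpy as np

# Nominal parameters from Table 1; beta and c are the uncertain ones.
N, Kt, Im, Ison = 260.6, 0.6, 0.0011, 400.0

def srm_A(beta, c):
    """System matrix of the SRM joint model for given damping beta and
    spring constant c (two-mass flexible-joint sign pattern assumed)."""
    return np.array([
        [0.0,  1.0,          0.0,                           0.0],
        [0.0,  0.0,          c / (N**2 * Im),               0.0],
        [0.0,  0.0,          0.0,                           1.0],
        [0.0, -beta / Ison, -(c / (N**2 * Im) + c / Ison), -beta / Ison]])

# Spectral abscissa at the four vertices of the uncertainty box:
for beta in (0.36, 0.44):
    for c in (117e3, 143e3):
        poles = np.linalg.eigvals(srm_A(beta, c))
        print(beta, c, np.max(poles.real))
```

Under this reading the model has a rigid-body integrator (a pole at the origin) plus a lightly damped flexible mode, consistent with the resonant open-loop Bode plot in Fig. 1.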
The system parameters are given in Table 1. Note that
the damping coefficient β and the spring constant c are considered uncertain. A Bode plot of the open-loop system for
different values of the two uncertain parameters is given in
Fig. 1.
The objective (see Fig. 2) is to find a controller that
achieves, for all possible values of the uncertain parameters, a disturbance rejection of at least 1:100 for constant
disturbances on the shaft angular position (i.e. z(t)) of the
motor (such as, e.g., load), and a bandwidth of at least
1 rad/s. This can be achieved by selecting a suitable performance weighting function W_P(s) (see the upper curve
in Fig. 3) and requiring ‖W_P(s)S(s)‖∞ < 1, where S(s) is
the closed-loop sensitivity function.
Fig. 1. Bode plot of the perturbed open-loop transfer from u to y₂.
Fig. 2. Closed-loop system with the selected weighting function W_P(s).
Fig. 3. Sensitivity function of the closed-loop system for the nominal
values of the parameters and the inverse of the weighting function W_P(s).
It should be noted here that this problem is of a rather
large scale: the BMI optimization problem (21) consists of
4 bilinear matrix inequalities, each of dimension 12 × 12,
and each a function of 95 variables (40 for the controller
parameters, and 55 for the closed-loop Lyapunov matrix).
Also note that the number of complicating variables, defined
in Tuan and Apkarian (2000) as min{dim(x), dim(y)}, in
this example equals 40. This makes it clear that the problem
Table 2
Performance achieved by the five local BMI approaches applied to the
model of the SRM

Method   Achieved γ_opt
NEW      0.6356
RMA      infeas.
MC       0.8114
PATH     infeas.
DK       0.8296
is far beyond the capabilities of the global approaches to
solving the underlying BMI problem, which can at present
deal with no more than just a few complicating variables.
First, using the result in Theorem 7, an initial controller
was found achieving an upper bound of γ_init = 1.0866,
which was subsequently used to initialize the newly proposed BMI optimization (see Algorithm 1). A tolerance
of TOL = 10⁻³ was selected. The new algorithm converged
in 10 iterations to γ_NEW = 0.6356. The computation took
about 100 min on a computer with a Pentium IV CPU at
1500 MHz and 1 GB of RAM.
Next, four other algorithms were tested on this example with the same initial controller, the same tolerance
and the same stopping conditions. These algorithms were the
rank minimization approach (RMA) (Ibaraki & Tomizuka,
2001), the method of centers (MC) (Goh et al., 1995), the
path-following method (PATH) (Hassibi et al., 1999), and
the alternating coordinate method (DK) (Iwasaki, 1999).
The results are summarized in Table 2. From among these
four approaches only two were able to improve the initial
controller, namely the MC, which achieved γ_MC = 0.8114
in about 610 min, and the DK iteration, which terminated in
about 20 min with γ_DK = 0.8296. The MC method was
unable to improve the performance further due to numerical
problems. Similar problems were reported in Fukuda and
Kojima (2001). The PATH method converged to an infeasible solution due to the fact that the initial condition is not close
enough to the optimal one, so that the first-order approximation that is made at each iteration is not accurate. Finally,
the RMA method was also unable to find a feasible solution.
This experiment shows that after initializing all BMI
approaches with the same controller, the newly proposed
method outperforms the other compared methods by achieving the lowest value for the cost function. On the other
hand, the initial controller itself also achieves a value for
the cost function that is rather close to the optimal costs obtained by the DK and the MC methods, i.e. these methods
were not able to significantly improve the initial solution.
This implies that the initial controller design method can
provide a good initial point for starting a local optimization.
For the newly proposed method, the upper and the lower
bounds on γ at each iteration are plotted in Fig. 4. Note that
at each iteration the upper bound represents a feasible value
for γ, and the lower bound an infeasible one. Also plotted
in the same figure are the values achieved by the DK iteration and the MC methods (NEW: 0.6356, MC: 0.8114,
DK: 0.8296).

Fig. 4. Upper and lower bounds on γ during the BMI optimization.

We note that the difficulties that
some of the other local approaches experienced are mainly
due to the large scale of the problem, which causes numerical
difficulties and very slow convergence.
The optimal controller obtained after the execution of the
newly proposed method has form (15). With this optimal
controller, the closed-loop sensitivity function is depicted in
Fig. 3, together with the inverse of the selected performance
weighting function W_P^{−1}(s); the sensitivity remains below
this bound, implying that the desired robust performance has
been achieved.
7. Conclusions
In this paper a new approach to the design of locally optimal robust dynamic output-feedback controllers for systems
with structured uncertainties was presented. The uncertainty
is allowed to have a very general structure and is only assumed to be such that the state-space matrices of the system belong to a certain convex set. The approach is based
on BMI optimization that is guaranteed to converge to a locally optimal solution provided that an initially feasible controller is given. This algorithm enjoys the useful properties
of computational efficiency and guaranteed convergence to
a local optimum. An algorithm for fast computation of an
initially feasible controller is also provided and is based on a
two-step procedure, where at each step an LMI optimization
problem is solved: one to find the optimal state-feedback
gain and one to find the remaining state-space matrices of
the output-feedback controller. The design objectives considered are H₂ and H∞ performance and pole placement
in LMI regions. The approach was tested on a model of
one joint of a space robotic manipulator, for which a robust
output-feedback controller was designed. In addition, the
proposed approach was compared to several existing approaches on a simpler BMI
optimization problem, and it became clear that it can act as
a good alternative for some applications.
Acknowledgements
This work is sponsored by the Dutch Technology Foun-
dation (STW) under project number DEL.4506.
References
Apkarian, P., & Gahinet, P. (1995). A convex characterization of
gain-scheduled H∞ controllers. IEEE Transactions on Automatic
Control, 40(5), 853–864.
… robust filtering for convex bounded uncertain systems. IEEE Transactions
on Automatic Control, 46(1), 100–107.
Goh, K.-C., Safonov, M. G., & Ly, J. H. (1996). Robust synthesis
via bilinear matrix inequalities. International Journal of Robust and
Nonlinear Control, 6(9–10), 1079–1095.
Goh, K.-C., Safonov, M. G., & Papavassilopoulos, G. P. (1995). Global
optimization for the biaffine matrix inequality problem. Journal of
Global Optimization, 7(4), 365–380.
Grigoriadis, K. M., & Skelton, R. E. (1996). Low-order control design
for LMI problems using alternating projection methods. Automatica,
32(8), 1117–1125.
Hassibi, A., How, J., & Boyd, S. (1999). A path-following method for
solving BMI problems in control. In Proceedings of the American
control conference, San Diego, CA (pp. 1385–1389).
Hol, C. W. J., Scherer, C. W., van der Meché, E. G., & Bosgra, O. H.
(2003). A nonlinear SDP approach to fixed-order controller synthesis
and comparison with two other methods applied to an active suspension
system. European Journal of Control, 9(1), 11–26.
Ibaraki, S., & Tomizuka, M. (2001). Rank minimization approach for
solving BMI problems with random search. In Proceedings of the
American control conference, Arlington, VA (pp. 18701875).
Iwasaki, T. (1999). The dual iteration for xed order control. IEEE
Transactions on Automatic Control, 44(4), 783788.
Iwasaki, T., & Rotea, M. (1997). Fixed order scaled H∞ synthesis. Optimal Control Applications and Methods, 18(6), 381–398.
Iwasaki, T., & Skelton, R. E. (1995). The XY-centering algorithm for the dual LMI problem: A new approach to fixed order control design. International Journal of Control, 62(6), 1257–1272.
Kanev, S., & Verhaegen, M. (2000). Controller reconfiguration for non-linear systems. Control Engineering Practice, 8(11), 1223–1235.
Kose, I. E., & Jabbari, F. (1999a). Control of LPV systems with partly measured parameters. IEEE Transactions on Automatic Control, 44(3), 658–663.
Kose, I. E., & Jabbari, F. (1999b). Robust control of linear systems with real parametric uncertainty. Automatica, 35(4), 679–687.
Kothare, M. V., Balakrishnan, V., & Morari, M. (1996). Robust constrained model predictive control using linear matrix inequalities. Automatica, 32(10), 1361–1379.
Leibfritz, F. (2001). An LMI-based algorithm for designing suboptimal static H2/H∞ discrete-time controllers. In Proceedings of the 35th IEEE conference on decision and control, Kobe, Japan (pp. 1459–1496).
Palhares, R. M., Takahashi, R. H. C., & Peres, P. L. D. (1997). H2 and H∞ state feedback control for discrete-time linear systems. In Second Latin American seminar on advanced control, LASAC'95, Chile (pp. 73–78).
Scherer, C., Gahinet, P., & Chilali, M. (1997). Multiobjective output-feedback control via LMI optimization. IEEE Transactions on Automatic Control, 42(7), 896–911.
Toker, O., & Özbay, H. (1995). On the NP-hardness of solving bilinear matrix inequalities and simultaneous stabilization with static output feedback. In Proceedings of the American control conference, Seattle, WA (pp. 2525–2526).
Tuan, H. D., & Apkarian, P. (2000). Low nonconvexity-rank bilinear matrix inequalities: Algorithms and applications in robust controller and structure designs. IEEE Transactions on Automatic Control, 45(11), 2111–2117.
Tuan, H. D., Apkarian, P., Hosoe, S., & Tuy, H. (2000a). D.C. optimization approach to robust control: Feasibility problems. International Journal of Control, 73(2), 89–104.
Tuan, H. D., Apkarian, P., & Nakashima, Y. (2000b). A new Lagrangian dual global optimization algorithm for solving bilinear matrix inequalities. International Journal of Robust and Nonlinear Control, 10, 561–578.
VanAntwerp, J. G., & Braatz, R. D. (2000). A tutorial on linear and bilinear matrix inequalities. Journal of Process Control, 10, 363–385.
VanAntwerp, J. G., Braatz, R. D., & Sahinidis, N. V. (1997). Globally optimal robust control for systems with time-varying nonlinear perturbations. Computers and Chemical Engineering, 21, S125–S130.
Xie, L., Fu, M., & de Souza, C. E. (1992). H∞ control and quadratic stabilization of systems with parameter uncertainty via output feedback. IEEE Transactions on Automatic Control, 37(8), 1253–1256.
Yamada, Y., & Hara, S. (1998). Global optimization for H∞ control with constant diagonal scaling. IEEE Transactions on Automatic Control, 43(2), 191–203.
Yamada, Y., Hara, S., & Fujioka, H. (2001). c-feasibility for H∞ control problems with constant diagonal scalings. Transactions of the Society of Instrument and Control Engineers, E-1(1), 1–8.
Zhou, K., & Doyle, J. (1998). Essentials of robust control. Englewood Cliffs, NJ: Prentice-Hall.
Zhou, K., Khargonekar, P. P., Stoustrup, J., & Niemann, H. H. (1995). Robust performance of systems with structured uncertainties in state-space. Automatica, 31(2), 249–255.
Stoyan Kanev received his M.Sc. degree in Control and Systems Engineering from the Technical University of Sofia, Bulgaria, in July 1998. The work on his master's thesis and its defence took place at the Control Lab of the Delft University of Technology, The Netherlands. From October 1999 until December 2003 he pursued his Ph.D. at the University of Twente, The Netherlands.
Since January 2004 he has been working as a postdoctoral researcher at the Delft Center for Systems and Control of the Delft University of Technology. His research interests include fault-tolerant control, randomized algorithms, and linear and bilinear matrix inequalities.
Carsten Scherer received his diploma degree and his Ph.D. degree in mathematics from the University of Würzburg (Germany) in 1987 and 1991, respectively. In 1989, Dr. Scherer spent six months as a visiting scientist at the Mathematics Institute of the University of Groningen (The Netherlands). In 1992, he was awarded a grant from the Deutsche Forschungsgemeinschaft (DFG) for six months of postdoctoral research at the University of Michigan (Ann Arbor) and at Washington University (St. Louis), respectively. In 1993 he joined the Mechanical Engineering Systems and Control Group at Delft University of Technology (The Netherlands), where he held a position as associate professor (universitair hoofddocent). In fall 1999 he spent a three-month sabbatical as a visiting professor at the Automatic Control Laboratory of ETH Zurich. Since fall 2001 he has been a full professor (Antoni van Leeuwenhoek hoogleraar) at Delft University of Technology. His main research interests cover various topics in applying optimization techniques to the development of new advanced controller design algorithms and their application to mechatronics and aerospace systems.
Michel Verhaegen received his engineering degree in aeronautics from the Delft University of Technology, The Netherlands, in August 1982, and his doctoral degree in applied sciences from the Catholic University of Leuven, Belgium, in November 1985. During his graduate study, he held a research assistantship sponsored by the Flemish Institute for scientific research (IWT).
From 1985 to 1987 he was a postdoctoral Associate of the US National Research Council (NRC) and was affiliated with the NASA Ames Research Center in California. After an assistant professorship at the Department of Electrical Engineering of the Delft University of Technology, he was from 1990 until 1994 a senior Research Fellow of the Dutch Academy of Arts and Sciences. During that time he was affiliated with the Network Theory Group of the Delft University of Technology. In the period 1994–1999 he was Associate Professor at the control laboratory of the Delft University of Technology, and in 1999 he was appointed full professor at the Faculty of Applied Physics of the University of Twente in the Netherlands. In 2001 Prof. Verhaegen moved back to Delft and is now co-chairing the new Delft Center for Systems and Control. He has held short sabbatical leaves at the University of Uppsala, McGill, Lund, and the German Aerospace Research Center (DLR) in Munich, and is participating in several European research networks.
His main research interest is the interdisciplinary domain of numerical algebra and system theory, in which he has published over 100 papers. Current activities focus on the transfer of knowledge about new identification and controller design methodologies to industry. Application areas include the process industry, active vibration/noise control, fault-tolerant control, and signal processing for control.
Prof. Verhaegen received a Best Presentation Award at the American Control Conference, Seattle, WA, in 1986, and a Recognition Award from NASA in 1989. He is project leader of four long-term research projects sponsored by the European Community, the Dutch National Science Foundation (STW), and Dutch industry.
Bart De Schutter received his degree in electrotechnical-mechanical engineering in 1991 and his doctoral degree in Applied Sciences (summa cum laude with congratulations of the examination jury) in 1996, both from K.U.Leuven, Belgium. Currently, he is Associate Professor at the Delft Center for Systems and Control of Delft University of Technology in Delft, The Netherlands. Bart De Schutter was awarded the 1998 SIAM Richard C. DiPrima Prize and the 1999 K.U.Leuven Robert Stock Prize for his Ph.D. thesis. His current research interests include hybrid systems control, multi-agent systems, control of transportation networks, and optimization.