Introduction to Adaptive Robust Control

Bin Yao

1 School of Mechanical Engineering, Purdue University, West Lafayette, Indiana 47907-2088, USA. byao@purdue.edu
2 Chang Jiang Chair Professor, The State Key Laboratory of Fluid Power Transmission and Control, Zhejiang University, Hangzhou 310027, China
The increasingly challenging control performance requirements of modern technology force control engineers to look beyond traditional linear control theory for more advanced nonlinear controllers. There has been an exponential growth in nonlinear control research during the past decades [12], with major advances reported in both nonlinear adaptive control (AC) [4, 17, 20-22, 24, 39, 41] and deterministic robust control (DRC) [3, 5, 9, 18, 25, 35, 49]. Systematic and constructive nonlinear control design methodologies such as the backstepping technique [13] have been proposed. In [42-44], Yao and Tomizuka developed a mathematically rigorous adaptive robust control (ARC) approach which bridges the gap between two of the main control research areas: nonlinear robust adaptive control (RAC) and deterministic robust control (DRC). Traditionally, those two research areas are presented as competing control design approaches, as each has its own benefits and limitations. The ARC approach recognizes the fundamentally different working mechanisms of those two approaches and effectively integrates the two design methodologies. The proposed ARC approach preserves the theoretical performance results of both AC and DRC. Their results complement each other and thus naturally overcome the well-known practical performance limitations of AC and DRC. The poor transients and non-robust performance of AC under disturbances are replaced by the guaranteed transients and robust performance results of DRC. Moreover, the design conservativeness of DRC is supplemented by the excellent steady-state tracking accuracy of AC: asymptotic tracking in the presence of parametric uncertainties without using discontinuous control action or nonlinear feedback gains approaching infinity. Other key papers on ARC include [7, 29, 31, 32, 38, 45, 46]. ARC has been shown to be a powerful approach to mechatronic system control in [26, 27, 33, 34, 36, 37, 40, 47]. It should be noted that ARC is a completely different control approach than the robust adaptive control pursued by other researchers [44].
The significant performance improvement of the proposed ARC in various implementations makes the approach an ideal choice for industrial applications demanding stringent performance. At the same time, by-products of the integrated approach [31, 37], accurate parameter and nonlinearity estimates, make it possible to add intelligent features such as machine component health monitoring and prognosis. It is thus beneficial for control engineers to get exposed to such an advanced nonlinear control design methodology and learn how to use the method to build intelligent and yet precise mechatronic systems.
To avoid overwhelming technical design complexity, in this chapter the tracking control of a first-order nonlinear system with uncertainties will be used to illustrate the advantages and limitations of traditional AC, DRC, and the proposed ARC.
1 Problem Formulation
Consider an uncertain nonlinear system described by

ẋ = f(x, t) + u,  f = φ(x)ᵀθ + Δ(x, t)   (1)

where x ∈ ℝ and u ∈ ℝ are the system output and input respectively, and f is an unknown nonlinear function. In reality, it might be difficult to determine the exact form of the nonlinear function f(x, t), so it is represented with two parts in (1). The first part, φ(x)ᵀθ, represents its approximation by a group of known basis functions φ(x) ∈ ℝᵖ with unknown weights θ ∈ ℝᵖ; physically, this part normally represents the terms derived from physical laws or certain kinds of approximation whose forms are usually known but whose magnitudes may not be known in advance. The second part, Δ(x, t), represents the approximation error.
Before starting the controller design, it is necessary to make some reasonable assumptions on prior plant knowledge. The more we assume to be known about the plant, i.e., the more restrictive the assumptions are, the better the theoretically achievable control performance. However, if the assumptions are too restrictive, it is less likely that they can be satisfied by actual physical systems, which would make the methodology useless. With these considerations in mind, the following reasonable and practical assumption is made:
A1. The extent of the parametric uncertainties and uncertain nonlinearities is known and given by

θ ∈ Ω_θ = {θ : θ_min < θ < θ_max},  Δ ∈ Ω_Δ = {Δ : |Δ(x, t)| ≤ δ(x, t)}   (2)

where θ_min ∈ ℝᵖ, θ_max ∈ ℝᵖ, and the bounding function δ(x, t) are known. Furthermore, they have finite values when all their variables except t are finite (e.g., x ∈ L_∞ ⇒ Δ(x, t) ∈ L_∞).
Let x_d(t) be the desired output trajectory, which is assumed to be bounded with a bounded derivative. Denote the tracking error as z = x − x_d(t). The objective is to synthesize a bounded control law for u so that the tracking error z(t) is made as small as possible.
2 Feedback Linearization
Before presenting the control methods for systems with uncertainties, let us first consider the idealistic situation in which the system does not have any uncertainties, i.e., assume that Δ = 0 and θ is perfectly known in (1). With this assumption, the feedback linearization technique [10], which uses nonlinear feedback control terms to cancel all nonlinear effects of the process so that the resulting closed-loop system behaves like a linear system, can be used to synthesize the following control law

u = −φ(x)ᵀθ + v   (3)

in which the first term cancels the effect of the physical nonlinearity in (1) and v can be thought of as a new virtual control input. By doing so, the resulting closed-loop system from the virtual input v to the output is given by

ẋ = f(x, t) + u = v   (4)
which is linear and time-invariant. For perfect tracking of a time-varying desired trajectory x_d(t) (i.e., x(t) = x_d(t), ∀t), the system dynamics (4) determine that a control action of v = v_m = ẋ_d(t) is needed. However, simply using this action is not sufficient, as the resulting closed-loop error dynamics would be described by ż = 0, which is not asymptotically stable. Thus some stabilizing feedback v_s is also needed to address the effect of a non-zero initial tracking error. For simplicity, let v_s = −kz, a proportional feedback of z with k being any positive gain. The resulting control law for v is then given by

v = v_m + v_s,  v_m = ẋ_d(t),  v_s = −kz,  k > 0   (5)

which leads to the following exponentially stable error dynamics

ż = −kz  ⇒  z(t) = z(0) exp(−kt)   (6)
The final form of the control law (3) with (5) is thus given by

u = u_m + u_s,  u_m = ẋ_d(t) − φ(x)ᵀθ,  u_s = −kz   (7)

where u_m can be thought of as the correct model compensation that is needed for the system (1) to track a time-varying trajectory x_d(t) perfectly, and u_s is a stabilizing feedback control action ensuring that the tracking error dynamics with the model compensation u_m are globally uniformly stable.
3 Adaptive Control (AC)
Consider now the situation where the system has parametric uncertainties only, i.e., assume Δ(x, t) = 0 in (1). For such a scenario, the adaptive control (AC) design [15, 23] can be used to synthesize a controller as follows.
Fig. 1. Adaptive control schematic
Adaptive control seeks to identify the unknown parameters on-line so that the effect of model uncertainties can eventually be eliminated for zero steady-state tracking error. The control law can be synthesized as though the system does not have any model uncertainties when the on-line parameter estimate θ̂(t) is used. With this design philosophy, as shown in Fig. 1, the control law has the same form as (7) except with θ̂(t) in place of θ:

u = u_m + u_s,  u_m = ẋ_d(t) − φ(x)ᵀθ̂(t),  u_s = −kz   (8)

which results in the following tracking error dynamics when Δ(x, t) = 0:

ż + kz = −φ(x)ᵀθ̃(t)   (9)

where θ̃(t) = θ̂(t) − θ represents the parameter estimation error vector. The effect of the right-hand side of (9) can be eliminated if the following gradient-type parameter adaptation law is used to update the parameter estimate vector θ̂(t):

dθ̂(t)/dt = Γφ(x)z,  θ̂(0) ∈ Ω_θ   (10)

where Γ is any symmetric positive definite (s.p.d.) adaptation rate matrix. With this adaptation law, the non-negative function V_a = (1/2)z² + (1/2)θ̃ᵀΓ⁻¹θ̃ and Barbalat's lemma [16] make it easy to obtain the following theorem [42]:
Theorem 1. In the presence of parametric uncertainties only (i.e., Δ = 0), with the adaptive control law (8) and the parameter adaptation law (10), all signals in the system are bounded and the tracking error asymptotically converges to zero, i.e., z → 0 as t → ∞.

In addition, if the desired trajectory satisfies the following persistently exciting (PE) condition

∃ T, t_0, ε_p > 0  s.t.  ∫_t^{t+T} φ(x_d(τ)) φ(x_d(τ))ᵀ dτ ≥ ε_p I_p,  ∀t ≥ t_0   (11)

then the parameter estimates θ̂ asymptotically converge to their true values (i.e., θ̃ → 0 as t → ∞).
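The AC law (8) with the gradient adaptation law (10) can be sketched in a few lines of simulation; all plant data, gains, and the adaptation rate below are illustrative assumptions, not values from the text.

```python
import numpy as np

# Simulation of the adaptive control law (8) with the gradient adaptation
# law (10) for the scalar plant xdot = phi(x)^T theta + u (Delta = 0).

theta = np.array([1.0, -0.5])                 # true weights (unknown to controller)
phi = lambda x: np.array([x, np.sin(x)])
xd = lambda t: np.sin(t)
xd_dot = lambda t: np.cos(t)
k = 5.0                                       # feedback gain
Gamma = 20.0 * np.eye(2)                      # s.p.d. adaptation rate matrix

dt, T = 1e-3, 20.0
x, theta_hat = 1.0, np.zeros(2)
z_hist = []
for t in np.arange(0.0, T, dt):
    z = x - xd(t)
    u = xd_dot(t) - phi(x) @ theta_hat - k * z           # (8)
    theta_hat = theta_hat + dt * (Gamma @ (phi(x) * z))  # (10), Euler step
    x += dt * (phi(x) @ theta + u)                       # true plant
    z_hist.append(z)

z_hist = np.array(z_hist)
print(abs(z_hist[0]), abs(z_hist[-1]))   # tracking error decays toward zero
```

Consistent with Theorem 1, the tracking error decays to near zero even though θ is never known exactly; parameter convergence itself would additionally require the PE condition (11).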
4 Deterministic Robust Control (DRC)
Deterministic robust control (DRC) [25, 49] utilizes precisely known system characteristics (e.g., the sign of a switching function based on feedback measurements and the known upper bound of model uncertainties in sliding mode control (SMC) [25]) to synthesize nonlinear feedback which overpowers all types of model uncertainty effects to achieve not only robust stability but also certain robust performance. For the system given by (1), since the goal is to make z as small as possible, it is important to look at how the control input influences z:

ż = ẋ − ẋ_d(t) = φ(x)ᵀθ + Δ(x, t) + u − ẋ_d(t)   (12)
Noting the ideal control law (7), when the model uncertainties (2) exist, it is natural to use the following control structure for system (1)

u = u_m + u_s,  u_m = ẋ_d(t) − φ(x)ᵀθ_o,  u_s = u_s1 + u_s2,  u_s1 = −kz   (13)

where θ_o ∈ Ω_θ is a fixed nominal estimate of θ. Substituting (13) into (12) yields

ż + kz − u_s2 = −[φ(x)ᵀθ̃_o − Δ(x, t)]   (14)

where θ̃_o = θ_o − θ. The left side of (14) represents the stable nominal closed-loop dynamics. The terms inside the brackets in (14) represent the effects of all model uncertainties. Though these terms are unknown, by assumption A1 they are bounded above by some known function h(x, t):

|φ(x)ᵀθ̃_o − Δ(x, t)| ≤ h(x, t)   (15)

For example, let h(x, t) = |φ(x)|ᵀ|θ_max − θ_min| + δ(x, t), in which | · | for a vector is defined componentwise. With these known system characteristics, a robust feedback u_s2 can be chosen as
u_s2 = −h(x, t) sgn(z)   (16)

where sgn(·) denotes the discontinuous sign function defined as

sgn(•) = { 1, if • > 0;  an undetermined value between −1 and 1, if • = 0;  −1, if • < 0 }   (17)

Such a feedback can dominate the effect of all the model uncertainties, as seen from the fact that

z { u_s2 − [φ(x)ᵀθ̃_o − Δ(x, t)] } ≤ −h(x, t)|z| + |z| |φ(x)ᵀθ̃_o − Δ(x, t)|
                                   ≤ −h(x, t)|z| + |z| h(x, t) = 0   (18)

This type of control law is commonly referred to as the ideal SMC law. The following theorem summarizes the theoretical control performance of such a feedback control law:

Theorem 2. With the SMC law (13), all signals in the system (1) are bounded and the output tracking error z exponentially converges to zero.
The ideal SMC law (16) contains the discontinuous function sgn(z); it therefore leads to a severe actuator chattering problem and may even lead to instability in practice [25]. To overcome this problem, one may replace the discontinuous control action h sgn(z) by a continuous approximation function S(h sgn(z)) that satisfies the following two conditions:

i.  z S(h sgn(z)) ≥ 0
ii. z [h sgn(z) − S(h sgn(z))] ≤ ε(t)   (19)

where ε(t) is any bounded time-varying positive scalar (i.e., 0 < ε(t) ≤ ε_M for some ε_M) which can be thought of as a measure of the approximation accuracy. The first condition in (19) preserves the stabilizing characteristic of the robust feedback and the second represents the approximation accuracy requirement. The smoothed SMC law (16) thus becomes

u = u_m + u_s,  u_m = ẋ_d(t) − φ(x)ᵀθ_o,
u_s = u_s1 + u_s2,  u_s1 = −kz,  u_s2 = −S(h sgn(z))   (20)
The schematic of such a control law is shown in Fig. 2.

Fig. 2. Deterministic robust control schematic

The following theorem summarizes the theoretical control performance of such a continuous robust feedback control law:

Theorem 3. With the continuous robust control law (20), all signals in the system (1) are bounded and the output tracking is guaranteed to have a prescribed transient and steady-state performance in the sense that the tracking
error z is bounded above by a known function which exponentially converges to a specified accuracy:

|z(t)|² ≤ |z(0)|² exp(−2kt) + 2 ∫_0^t exp(−2k(t − τ)) ε(τ) dτ
        ≤ |z(0)|² exp(−2kt) + (ε_M / k) [1 − exp(−2kt)]   (21)
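The transient bound (21) can be checked numerically. The sketch below simulates the smoothed law (20), using the S3-type approximation described later in Remark 1, on an illustrative plant with both parametric uncertainty and a bounded disturbance; all numerical values are assumptions, not from the text.

```python
import numpy as np

# Simulation of the smoothed SMC law (20), comparing |z(t)| against the
# performance bound of Theorem 3 along the trajectory.

theta = np.array([1.4, -0.2])                    # true weights, unknown
theta_min, theta_max = np.array([0.0, -1.0]), np.array([2.0, 0.0])
theta_o = 0.5 * (theta_min + theta_max)          # fixed nominal estimate
delta = lambda x, t: 0.2 * np.sin(10 * t)        # uncertain nonlinearity, |Delta| <= 0.2
phi = lambda x: np.array([x, np.sin(x)])
xd, xd_dot = (lambda t: np.sin(t)), (lambda t: np.cos(t))
k, eps = 2.0, 0.05                               # feedback gain and accuracy measure

dt, T = 1e-4, 6.0
t_grid = np.arange(0.0, T, dt)
x = 1.0
z_hist = []
for t in t_grid:
    z = x - xd(t)
    h = np.abs(phi(x)) @ np.abs(theta_max - theta_min) + 0.2   # bound as in (15)
    us2 = -(h ** 2) * z / (4.0 * eps)                          # a linear smoothing of -h sgn(z)
    u = xd_dot(t) - phi(x) @ theta_o - k * z + us2             # (20)
    x += dt * (phi(x) @ theta + delta(x, t) + u)
    z_hist.append(z)

z_hist = np.array(z_hist)
bound = np.sqrt(z_hist[0] ** 2 * np.exp(-2 * k * t_grid)
                + (eps / k) * (1 - np.exp(-2 * k * t_grid)))   # from (21)
violation = np.max(np.abs(z_hist) - bound)
print(violation)                      # stays at or below ~0 up to numerical error
print(abs(z_hist[-1]), np.sqrt(eps / k))
```

Along the whole run, |z(t)| stays under the exponentially converging bound of (21), and the final error sits well inside the steady-state ball of radius √(ε_M/k).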
Remark 1. Some specific approximation functions which satisfy (19) are given below.

S1: As in most smoothed SMC schemes [23, 42], the continuous saturation function sat(z/ε_z) can be used to replace sgn(z) to obtain a continuous approximation of the ideal discontinuous control action h sgn(z). To take into account the time-varying nature of h(x, t), the strength of the discontinuity, a time-varying boundary layer thickness given by ε_z = 4ε/h can be used. With this approximation,

S(h sgn(z)) = h(x, t) sat( h z / (4ε) )   (22)

Obviously, (22) satisfies condition i of (19). When |z| ≥ ε_z, h|z| − z h sat(z/ε_z) = 0. When |z| ≤ ε_z,

h|z| − z h sat( h z / (4ε) ) = h|z| − h²z²/(4ε) = ε − [ (1/(2√ε)) h|z| − √ε ]² ≤ ε   (23)

Thus, condition ii of (19) is satisfied.
S2: Later, when the methodology is extended from systems of relative degree one to systems of higher relative degree, the backstepping design procedure will be used, which needs the derivatives of the control components at each step recursively. In such a case, a sufficiently smooth modification is required. For this purpose, an approximation of sgn(·) by the smooth tanh(·) function can be used. Specifically, it is easy to verify that the tanh(·) function has the following properties:

tanh(0) = 0,  tanh(∞) = 1,  tanh(−∞) = −1
0 ≤ |u| − u tanh(u/ε_s) ≤ κ ε_s,  ∀u ∈ ℝ and ε_s > 0   (24)

where κ = 0.2785. Thus, by choosing ε_s = ε/(κh) and defining

S(h sgn(z)) = h(x, t) tanh( z / ε_s ) = h(x, t) tanh( κ h z / ε )   (25)

from (24), it is straightforward to verify that (25) satisfies conditions i and ii of (19).
S3: For mathematical convenience, the following simple smooth approximation can also be used

S(h sgn(z)) = (1/(4ε)) h² z   (26)

which obviously satisfies condition i. Condition ii of (19) is also satisfied, by completion of squares:

z [h sgn(z) − S(h sgn(z))] = h|z| − (1/(4ε)) h² z² = ε − [ (1/(2√ε)) h|z| − √ε ]² ≤ ε   (27)

Note that, for large z, this approximation has an equivalent local gain with respect to (w.r.t.) z of roughly h²/(4ε), while the approximations S1 and S2 have equivalent local gains approaching zero. In this regard, the approximations S1 and S2 may have a better ability to deal with actuator saturation.
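Conditions i and ii of (19) for the three smoothing functions S1 (22), S2 (25), and S3 (26) can be verified numerically over a grid of z values; the values of h, ε, and the grid below are arbitrary test choices.

```python
import numpy as np

# Numerical check of conditions i and ii of (19) for S1, S2, and S3.
# kappa = 0.2785 as in (24); h, eps, and the z grid are test values.

eps, h, kappa = 0.1, 3.0, 0.2785
z = np.linspace(-5.0, 5.0, 20001)

S1 = h * np.clip(h * z / (4.0 * eps), -1.0, 1.0)   # h * sat(h z / (4 eps)), (22)
S2 = h * np.tanh(kappa * h * z / eps)              # h * tanh(kappa h z / eps), (25)
S3 = (h ** 2) * z / (4.0 * eps)                    # (26)

checks = {}
for name, S in (("S1", S1), ("S2", S2), ("S3", S3)):
    cond_i = bool(np.all(z * S >= -1e-12))         # condition i: z S(h sgn z) >= 0
    cond_ii = float(np.max(z * (h * np.sign(z) - S)))  # condition ii: should be <= eps
    checks[name] = (cond_i, cond_ii)
    print(name, cond_i, cond_ii)
```

All three satisfy condition i, and the worst-case value of z[h sgn(z) − S] stays at or below ε; for S1 and S3 the maximum is attained exactly at h|z| = 2ε, matching the completion of squares in (23) and (27).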
5 Direct Adaptive Robust Control (DARC)
The preceding analysis shows that AC (or RAC) can asymptotically eliminate the effect of parametric uncertainties (i.e., φ(x)ᵀθ̃ → 0 as t → ∞ in (9)) through on-line parameter adaptation. As a result, zero steady-state tracking error can be achieved without using infinite-gain feedback; asymptotic output tracking is achieved for any gain k in (8), as shown in Theorem 1. However, such an adaptive control law suffers from two practical problems. First, the transient performance of the closed-loop system is not clearly defined, and large tracking errors like those exhibited by the bursting phenomenon (page 272 of [1]) may occur when the system is subject to small bounded disturbances. Second, the effect of unknown nonlinearities Δ(x, t), such as external disturbances, is not considered in the design. It is well known that an integral-type adaptation law such as (10) may suffer from parameter drifting and destabilize the system in the presence of even small disturbances and/or measurement noise when the PE condition (11) is not satisfied [19]. Considering that every physical system is always subject to some uncertain nonlinearities/disturbances, safe implementation of the above adaptive controller in practice is questionable. In contrast to adaptive control, the DRC design achieves both guaranteed transient performance and steady-state tracking accuracy even in the presence of uncertain nonlinearities, through strong nonlinear robust feedback. This result makes the DRC design attractive for practical applications. However, asymptotic tracking cannot be attained unless the control law is allowed to be discontinuous or some equivalent gains in the control law approach infinity [42]; either way is impractical, since both will inevitably excite neglected high-frequency unmodeled dynamics. Because of the distinct benefits and practical limitations of DRC and AC, they have traditionally been presented as competing control design approaches [13, 18].
A deeper investigation of the fundamental working mechanisms of the two approaches is worthwhile. The tracking error dynamics of DRC, (14), are rewritten as

ż + kz + S(h sgn(z)) = −[φ(x)ᵀθ̃_o − Δ(x, t)]   (28)

in which the left side represents the nominal closed-loop dynamics and the right side is purely the model compensation error due to various modeling uncertainties. (28) can be considered as a nonlinear filter with the tracking error z being the output and the model compensation errors the inputs. Thus, the essence of the DRC design is to construct a proper nonlinear filter structure (reflected by the third term in (28)) that attenuates the effect of various modeling uncertainties to an acceptable level (as measured by ε_M in Theorem 3). Theoretically, the tracking error can be arbitrarily reduced by choosing an increasingly smaller value of ε(t). However, this will inevitably increase the slope of the function S(h sgn(z)) around z = 0, as it is essentially required to change from −h to h when z varies from a small negative value to a small positive value; for example, the approximations S1 (22), S2 (25), and S3 (26) have equivalent local gains of h²/(4ε), κh²/ε, and h²/(4ε) w.r.t. z at z = 0, respectively. Consequently, the bandwidth of the resulting tracking error dynamics increases: around z = 0, the left side of (28) approaches a first-order system with transfer function 1/(s + k_eq), where k_eq = k + S′(0), in which S′(0) denotes the local slope of S(h sgn(z)) at z = 0.

The adaptive robust control (ARC) approach instead replaces the fixed parameter estimate θ_o in the model compensation by an on-line estimate θ̂(t):

u = u_m + u_s,  u_m = ẋ_d(t) − φ(x)ᵀθ̂(t),  u_s = u_s1 + u_s2,  u_s1 = −kz   (29)

which leads to the error dynamics

ż + kz = u_s2 − [φ(x)ᵀθ̃(t) − Δ(x, t)]   (30)

If the robust feedback u_s2 is synthesized to satisfy the robust performance conditions

i.  z u_s2 ≤ 0
ii. z { u_s2 − [φ(x)ᵀθ̃(t) − Δ(x, t)] } ≤ ε(t)   (31)

then, the same theoretical results as in Theorem 3 can be obtained.
Furthermore, if the parameter estimates θ̂(t) can be updated via an adaptation law similar to (10) so that the effect of the parametric uncertainties can be asymptotically eliminated in the presence of parametric uncertainties only (i.e., φ(x)ᵀθ̃ → 0 as t → ∞ when Δ = 0), then an improved steady-state tracking performance, asymptotic output tracking, is achieved as in AC. It is thus necessary to find an adaptation law and a robust feedback that satisfy these requirements simultaneously.

The traditional adaptation law (10) could lead to unbounded parameter estimates in the presence of disturbances [19]. As a result, it cannot be used directly in the adaptive robust control law (29), since no bounded robust control term u_s2 can be found to attenuate the unbounded model uncertainties in (31). To solve this problem, some coordination mechanism should be used to condition the traditional parameter adaptation law so that only bounded on-line parameter estimates are used in the adjustable model compensation (29), without affecting its nominal estimation capability. Two ways to achieve this are given below.
5.1 Smooth Projection Based ARC Design
The first approach is to use only a bounded smooth projection of the parameter estimates in the control law [28, 43, 44]. Specifically, as shown in Fig. 3, the smooth projection π(θ̂) is substituted for θ̂ in (29):

u = u_m + u_s,  u_m = ẋ_d(t) − φ(x)ᵀπ(θ̂(t)),  u_s = u_s1 + u_s2,  u_s1 = −kz   (32)
Fig. 3. Smooth projection based adaptive robust control
Such a control law leads to the following error dynamics

ż + kz = u_s2 − [φ(x)ᵀ(π(θ̂) − θ) − Δ(x, t)]   (33)

Let ε_θ = [ε_1, . . . , ε_p]ᵀ be a vector of arbitrarily small positive real numbers.
As shown in Fig. 4, for each parameter estimate θ̂_i there exists a real-valued, sufficiently smooth, nondecreasing function π_i having the following properties

π_i(θ̂_i) = θ̂_i,  ∀θ̂_i ∈ Ω_i = {θ_i : θ_imin ≤ θ_i ≤ θ_imax}
π_i(θ̂_i) ∈ Ω_εi = {θ_i : θ_imin − ε_i ≤ θ_i ≤ θ_imax + ε_i},  ∀θ̂_i ∈ ℝ   (34)

with bounded derivatives up to sufficiently high order.

Now define the smooth projection mapping vector π : ℝᵖ → ℝᵖ for the parameter estimate vector θ̂ = [θ̂_1, . . . , θ̂_p]ᵀ by

π(θ̂) = [π_1(θ̂_1), . . . , π_p(θ̂_p)]ᵀ   (35)

Then,

π(θ̂) = θ̂,  ∀θ̂ ∈ Ω_θ = {θ : θ_min ≤ θ ≤ θ_max}
π(θ̂) ∈ Ω_ε = {θ : θ_min − ε_θ ≤ θ ≤ θ_max + ε_θ},  ∀θ̂ ∈ ℝᵖ   (36)
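One concrete way to build a component π_i with the properties (34) is sketched below: the identity inside [θ_imin, θ_imax], smoothly saturating into the ε_i-enlarged interval outside. This tanh-based construction is only an illustration (it is C² rather than arbitrarily smooth at the boundary), not the map assumed in the text.

```python
import numpy as np

# A sketch of one component pi_i of the smooth projection (34)-(35):
# identity on [th_min, th_max], smooth saturation into the eps_i band outside.

def pi_i(th, th_min, th_max, eps_i):
    if th > th_max:
        return th_max + eps_i * np.tanh((th - th_max) / eps_i)
    if th < th_min:
        return th_min - eps_i * np.tanh((th_min - th) / eps_i)
    return th          # unchanged inside the known parameter range

th_min, th_max, eps_i = -1.0, 2.0, 0.1
samples = [pi_i(v, th_min, th_max, eps_i) for v in (-5.0, -1.0, 0.3, 2.0, 50.0)]
print(samples)         # all values lie inside [th_min - eps_i, th_max + eps_i]
```

The map is nondecreasing, agrees with θ̂_i on the known range, and never leaves the ε_i-enlarged interval, which is exactly what bounds the model compensation in (32).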
Fig. 4. A non-decreasing sufficiently smooth projection map

Through the use of the smooth projections, the model compensation in (32) stays bounded no matter how the estimate θ̂(t) evolves, which allows an integral type of parameter adaptation law to be used. As a result, a bounded robust
feedback u_s2 can be synthesized in the same way as in DRC to meet the following robust performance requirements

i.  z u_s2 ≤ 0
ii. z { u_s2 − [φ(x)ᵀ(π(θ̂(t)) − θ) − Δ(x, t)] } ≤ ε(t)   (37)

Specifically, since π(θ̂) ∈ Ω_ε, ∀θ̂ ∈ ℝᵖ, noting assumption A1, there always exists a known function h(x, t) which bounds the total on-line model compensation error from above:

|φ(x)ᵀ(π(θ̂) − θ) − Δ(x, t)| ≤ h(x, t)   (38)

To robustify the parameter adaptation process, let l(θ̂) ∈ ℝᵖ be any vector of functions that satisfies the following conditions
i.  l(θ̂) = 0, if θ̂ ∈ Ω_θ
ii. θ̃ᵀ l(θ̂) ≥ 0, otherwise   (39)

where the resulting modified adaptation law is

dθ̂/dt = Γ [ −l(θ̂) + φ(x)z ]   (40)
(40)
Essentially, condition i of (39) is to make sure that no modication is made
when the parameter estimates are in the known region where the actual pa-
rameters lie to preserve the learning capability of original parameter adap-
tation law. Condition ii of (39) states the fact that the modications are
nonlinear damping-like terms with respect to the parameter estimation er-
rors to enhance the robustness of the parameter estimation process by elim-
inating/reducing the drifting problem of the pure integral type of parameter
adaptation law (10). For cubic type of
i
) be any func-
tion having the shape shown in Fig. 5. Then l
) :=
_
l
1
(
1
), . . . , l
p
(
p
)
_
T
satises (39)
Fig. 5. A nonlinear damping-like adaptation rate modification
The following theorem shows that all these modifications do not alter the nominal learning capability of the original parameter adaptation law.

Theorem 4. With the ARC law (29) and the parameter adaptation law (40), the following results hold:

A. In general, all physical signals in the system are bounded and all the results in Theorem 3 are obtained, i.e., the output tracking is guaranteed to have a prescribed transient and steady-state performance in the sense that the tracking error is bounded above by a known function which exponentially converges to the ball {z : |z| ≤ √(ε_M/k)} with a convergence rate no less than k.

B. If, after a finite time, the system is subject to parametric uncertainties only (i.e., Δ(x, t) = 0 after a finite time), then, in addition to the results stated in A, all the results in Theorem 1 are obtained as well, i.e., the tracking error asymptotically converges to zero. Furthermore, when the PE condition (11) is satisfied, the parameter estimates converge to their true values.

Theorem 4 shows that ARC retains the theoretical results of both DRC and AC. Naturally, this overcomes the drawbacks of each. The main drawbacks of AC, undefined transient performance and non-robustness to uncertain nonlinearities, are overcome by result A of Theorem 4. The drawback of DRC, poor steady-state tracking performance, is overcome by the asymptotic tracking performance in B of Theorem 4. In this sense, ARC is a synergistic integration of the DRC and AC designs. Note that one of the nice features of ARC is that the underlying control law is a DRC-type robust one. The adaptation loop can be switched off at any time without affecting closed-loop stability, since the resulting control law is a DRC one and the results in part A of Theorem 4 remain valid.
5.2 Discontinuous Projection Based ARC Design
The smooth projection based ARC design presented in the previous subsection suffers from the drawback that the internal parameter estimates θ̂(t) may still become unbounded when uncertain nonlinearities exist. Though this does not affect the stability of the actual closed-loop system, it is nevertheless better to make the parameter adaptation process more robust to uncertain nonlinearities, which can be achieved either by some variations of the smooth σ-modification developed in [42] or by the discontinuous projection method from robust adaptive control (RAC) [29, 42]. The variations of the σ-modification method used in [6, 42] suffer from the problem that the bound of the parameter estimate θ̂ cannot be known in advance. Thus, no robust control law with a pre-specified ε can satisfy (37). Consequently, the achievable transient performance cannot be pre-specified. In contrast, the discontinuous projection method used in [29, 42] guarantees that θ̂(t) stays in a known bounded region at all times. Thus, it does not have the unbounded parameter problem that is associated with the smooth projection based ARC design of the previous subsection. Furthermore, it can be easily implemented when the adaptation rate matrix Γ is chosen to be diagonal. Because of these practical benefits, the discontinuous projection method has been used in all subsequent ARC designs and applications [26, 30, 34, 45, 46]. With this modification, the resulting adaptation law becomes

dθ̂/dt = Proj_θ̂( Γφ(x)z )   (41)
where the projection mapping Proj_θ̂(•) = [Proj_θ̂1(•_1), . . . , Proj_θ̂p(•_p)]ᵀ and

Proj_θ̂i(•_i) = { 0, if θ̂_i = θ_imax and •_i > 0, or θ̂_i = θ_imin and •_i < 0;  •_i, otherwise }   (42)
Property 1. The projection mapping (41) has the following properties [42]

P1. θ̂(t) ∈ Ω_θ = {θ̂ : θ_min ≤ θ̂ ≤ θ_max}, ∀t
P2. θ̃ᵀ [ Γ⁻¹ Proj_θ̂(Γτ) − τ ] ≤ 0, ∀τ   (43)

With P1 of (43), the ARC law (29) can be used with a robust feedback u_s2 = −S(h sgn(z)), where h is any bounding function satisfying h ≥ |φ(x)|ᵀ|θ_max − θ_min| + δ(x, t). Such a control law is graphically shown in Fig. 6 and leads to the error dynamics given by (30).
Fig. 6. Discontinuous projection based adaptive robust control
Theorem 5. With the ARC law (29) and the discontinuous projection based parameter adaptation law (41), the following results hold:

A. In general, all signals in the system are bounded and all the results in Theorem 3 are obtained, i.e., the output tracking is guaranteed to have a prescribed transient and steady-state performance in the sense that the tracking error is bounded above by a known function which exponentially converges to the ball {z : |z| ≤ √(ε_M/k)} with a convergence rate no less than k.

B. If, after a finite time, the system is subject to parametric uncertainties only (i.e., Δ(x, t) = 0 after a finite time), then, in addition to the results stated in A, all the results in Theorem 1 are obtained as well, i.e., the tracking error asymptotically converges to zero. Furthermore, when the PE condition (11) is satisfied, the parameter estimates converge to their true values.
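The full discontinuous projection ARC loop, the adjustable model compensation of (29), a smoothed robust term, and the projected adaptation law (41), can be sketched in simulation. All plant data, bounds, and gains below are illustrative assumptions, not values from the text.

```python
import numpy as np

# Simulation of the discontinuous projection based ARC law: (29) with an
# S3-type smoothed robust term and the adaptation law (41), stepped with
# an Euler update plus componentwise projection.

theta = np.array([1.4, -0.2])                    # true weights, unknown
theta_min, theta_max = np.array([0.0, -1.0]), np.array([2.0, 0.0])
delta = lambda x, t: 0.2 * np.sin(10 * t)        # bounded disturbance
phi = lambda x: np.array([x, np.sin(x)])
xd, xd_dot = (lambda t: np.sin(t)), (lambda t: np.cos(t))
k, eps, gamma = 2.0, 0.05, 20.0

dt, T = 1e-4, 10.0
x, theta_hat = 1.0, 0.5 * (theta_min + theta_max)
z_hist, in_range = [], True
for t in np.arange(0.0, T, dt):
    z = x - xd(t)
    h = np.abs(phi(x)) @ np.abs(theta_max - theta_min) + 0.2
    us2 = -(h ** 2) * z / (4.0 * eps)                        # smoothed robust term
    u = xd_dot(t) - phi(x) @ theta_hat - k * z + us2         # (29)
    # Euler step of (41): unconstrained update, then projection onto the bounds
    theta_hat = np.clip(theta_hat + dt * gamma * phi(x) * z, theta_min, theta_max)
    in_range = in_range and bool(np.all(theta_hat >= theta_min)
                                 and np.all(theta_hat <= theta_max))
    x += dt * (phi(x) @ theta + delta(x, t) + u)
    z_hist.append(z)

z_hist = np.array(z_hist)
print(in_range, abs(z_hist[-1]), np.sqrt(eps / k))
```

As Theorem 5 predicts, the estimates never leave the known range Ω_θ (property P1) and the tracking error settles well inside the ball of radius √(ε_M/k) despite the disturbance.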
Remark 2. With the discontinuous projection (42), the parameter adaptation law (41) becomes a set of differential equations whose right-hand side functions are discontinuous with respect to θ̂. As the only known sufficient condition for a differential equation ẋ = f(t, x) to have local existence and uniqueness of its solution is that the function f(t, x) be locally Lipschitz in x (i.e., at least continuous in x), one may raise the issue of the existence and uniqueness of the solutions of the proposed discontinuous projection based ARC designs. Such a concern is not an issue in reality due to the following considerations:

S1. Though the continuous or smooth projections as described in subsection 5.1 may be preferred by pure theoreticians, the resulting controller cannot achieve the same level of performance in implementation as the proposed discontinuous projection based ARC design, as detailed below:

1) Nowadays almost all advanced nonlinear control laws have to be implemented approximately by a digital computer, as there is no suitable hardware to faithfully implement complex nonlinear control laws in the continuous time domain. The essential trait of a digital computer lies in its ability to implement complex logic decisions in a straightforward way. In comparison, calculating values of complex nonlinear functions may need significant computation time, and, aside from the computation time issue, implementing a controller described by a set of differential equations having stiff nonlinearities on the right-hand side may suffer from significant numerical approximation errors. From this point of view, implementing the discontinuous projection based controllers on a digital computer is rather straightforward and better conditioned in some sense than implementing the continuous or smooth projection based controllers. To see this, let us take a closer look at how the two classes of controllers are actually implemented on a digital computer. When continuous projections and modifications such as the ones in (34) and (39) are used in the parameter adaptation law (40), the resulting controller is described by a set of differential equations whose right-hand side contains those continuous or smooth modification terms. Aside from the complexity of these modification terms in terms of the real-time computation needed, they tend to be very stiff during the transition periods when the parameter estimates are going out of their known ranges Ω_θ, as the thickness of the boundary layers used for the smoothing (e.g., ε_θ in (36)) has to be very small for a better theoretically guaranteed control performance in general. The resulting controller is thus normally described by a set of
differential equations having stiff nonlinearities on the right-hand side. It is well known that such systems are hard to implement well numerically on a digital computer. For example, when the Euler discretization method is used (as is normally done in actual implementation due to the much smaller on-line computation time needed), the parameter adaptation law (40) would be implemented as
θ̂((k + 1)T) = θ̂(kT) + Γ [ −l(θ̂) + φ(x)z ] |_{t=kT} · T   (45)

where T is the sampling period, θ̂((k+1)T) and θ̂(kT) are the values of the parameter estimates at the sampling instants (k+1)T and kT respectively, and ·|_{t=kT} denotes the value of the quantity at the sampling instant kT. As mentioned previously, when the parameter estimates start going
out of the known bounded range Ω_θ, the modification term l(θ̂) is quite large due to the use of a very small boundary layer thickness for a better theoretically guaranteed control performance. Due to hardware capacity limitations, the sampling period T in implementation cannot be chosen arbitrarily small. With this in mind, at the moments when these modification terms start acting, a significantly larger numerical approximation error of the parameter adaptation law (40) by its discretized version (45) could exist in implementation. This could easily lead to an undesirable chattering problem of the parameter estimates at the boundary when the modification term l(θ̂) starts acting. Furthermore, there is no guarantee that the resulting parameter estimates will actually lie within the pre-specified ranges that the original continuous or smooth modifications to the parameter adaptation law in the continuous time domain are supposed to achieve.

On the other hand, the digital implementation of the proposed discontinuous projection law is carried out by a combination of the usual approximation of differential equations having no stiff nonlinearities on the right-hand side and the logic operations that the discontinuous projection modification is supposed to achieve in the continuous time domain. As such, it does not have the chattering problem of parameter estimates at the boundary, and the resulting parameter estimates are kept within the pre-specified range Ω_θ. Specifically, the proposed discontinuous projection based parameter adaptation law (41) is implemented digitally by
18 Bin Yao
\hat{\theta}_u((k+1)T) = \hat{\theta}(kT) + \Gamma\varphi z\big|_{t=kT}\, T

\hat{\theta}_i((k+1)T) = \begin{cases} \theta_{i\max} & \text{if } \hat{\theta}_{iu}((k+1)T) > \theta_{i\max} \\ \theta_{i\min} & \text{if } \hat{\theta}_{iu}((k+1)T) < \theta_{i\min} \\ \hat{\theta}_{iu}((k+1)T) & \text{otherwise} \end{cases}, \quad i = 1, \ldots, p   (46)

where \hat{\theta}_u = [\hat{\theta}_{1u}, \ldots, \hat{\theta}_{pu}]^T. This implementation not only precisely guarantees that the parameter estimates stay within the pre-specified bounds at every sampling instant, but also avoids the chattering problem described above.
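As a concrete sketch of the update (46), the step below performs one Euler integration of the unconstrained adaptation law followed by the component-wise limiting logic. The sampling period, adaptation rates, and bounds are illustrative values, not taken from the chapter.

```python
import numpy as np

# One step of the digital implementation (46): an Euler step of the
# unconstrained adaptation law followed by a component-wise clamp.
# Gamma is the (diagonal) adaptation rate matrix, phi the regressor,
# z the tracking error, T the sampling period.
def discrete_projection_update(theta_hat, Gamma, phi, z, T, theta_min, theta_max):
    theta_u = theta_hat + Gamma @ phi * z * T      # unconstrained Euler step
    return np.clip(theta_u, theta_min, theta_max)  # limiting logic, i = 1..p

theta_hat = np.array([0.9, 0.0])
Gamma = np.diag([10.0, 10.0])
phi = np.array([1.0, -0.5])
theta_new = discrete_projection_update(theta_hat, Gamma, phi, z=0.2, T=0.01,
                                       theta_min=np.array([-1.0, -1.0]),
                                       theta_max=np.array([1.0, 1.0]))
```

However large the unconstrained step, the returned estimate lies inside the pre-specified range, which is exactly the guarantee the continuous modifications fail to provide after discretization.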
Fig. 7. Direct adaptive robust control with measurement noise (block diagram)
Fig. 8. Desired compensation adaptive robust control (block diagram: plant with linear stabilizing feedback, nonlinear robust feedback, adaptation law with desired regressor, and adjustable desired model compensation)
In the DCARC law below, the regressor is evaluated along the desired trajectory, and a strengthened robust control u_s for nominal closed-loop stability is included:
u = u_m + u_s, \quad u_m = \dot{x}_d(t) - \varphi_d^T(t)\hat{\theta}, \quad u_s = u_{s1} + u_{s2}, \quad u_{s1} = -k_{s1}z

\dot{\hat{\theta}} = \mathrm{Proj}_{\hat{\theta}}(\Gamma\tau), \quad \tau = \varphi_d(t)z   (48)
where k_{s1} is any nonlinear gain satisfying

k_{s1} \ge k + g(x, t)   (49)

in which g(x, t) denotes a known function bounding the effect of the discrepancy between the actual regressor \varphi(x) and the desired regressor \varphi_d(t) = \varphi(x_d(t)).
u_{s2} is required to satisfy robust performance conditions similar to (31), with \varphi(x) replaced by \varphi_d(t), i.e.,

i. \; z u_{s2} \le 0

ii. \; z\left[u_{s2} - \left(\varphi(x_d)^T\tilde{\theta}(t) - \Delta(x, t)\right)\right] \le \varepsilon(t)   (50)
Introduction to Adaptive Robust Control 21
For example, let u_{s2} = -S(h\,\mathrm{sgn}(z)), where h is any bounding function satisfying h \ge |\varphi(x_d)|^T|\theta_{\max} - \theta_{\min}| + \delta(x, t).
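One common way to realize the smoothing operator S is a boundary-layer saturation; the sketch below (gain h and layer width are arbitrary demo values, not from the chapter) checks numerically that the resulting u_{s2} satisfies condition i of (50).

```python
import numpy as np

# A sketch of the smoothed robust term u_s2 = -S(h*sgn(z)). Here S is
# taken to be a boundary-layer saturation, one illustrative choice of
# smoothing; h and the layer width are arbitrary demo values.
def u_s2(z, h, layer=1e-3):
    # saturating z/layer keeps the control continuous through z = 0
    return -h * np.clip(z / layer, -1.0, 1.0)

zs = np.linspace(-0.01, 0.01, 101)
vals = zs * u_s2(zs, h=5.0)   # z * u_s2 over a sweep of tracking errors
```

Since u_{s2} always opposes the sign of z, the product z·u_{s2} is non-positive everywhere, which is condition i.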
Theorem 6. With the DCARC law (48), the same theoretical results as stated
in Theorem 5 are obtained.
The above DCARC law achieves the same theoretical performance as the previously presented DARC laws. Moreover, as shown in Fig. 8, since the adaptation function in (48) depends on the feedback signal only linearly (through z), the integral-type adaptation law renders both the parameter estimates \hat{\theta} and the model compensation u_m largely insensitive to measurement noise.
5.4 Indirect Adaptive Robust Control (IARC)
The underlying parameter adaptation law (41) in DARC is based on direct
adaptive control designs, in which the control law and the parameter adap-
tation law are synthesized simultaneously through certain stability criteria to
meet the sole objective of reducing the output tracking error. Such a design
normally leads to a controller whose dynamic order is as low as the num-
ber of unknown parameters to be adapted while achieving excellent output
tracking performance [29, 30, 44]. However, the direct approach also has the
drawback that the parameter estimates normally do not converge to, or even approach, their true values fast enough, as observed in actual use [34, 45]. Poor parameter estimate convergence with DARC designs is mainly due to: (i) a gradient-type adaptation law (41) with a constant adaptation rate matrix, which does not converge as well as a least-squares type; (ii) the adaptation function being driven by the actual tracking error z, which is very small in implementation for a well-designed direct adaptive control law and is thus more prone to corruption by factors neglected during synthesis of the parameter adaptation law, such as sampling delay and noise; (iii) the PE
condition (11) needed for parameter convergence cannot always be met during
rameter estimates to be used for secondary purposes that require more reliable
and accurate on-line parameter estimates such as machine component health
monitoring and prognosis. The indirect adaptive robust control (IARC) de-
sign presented in [38] can be used when more accurate parameter estimates
are needed. It completely separates construction of the parameter estimation
law from the design of the underlying robust control law as detailed below.
One of the key elements of ARC design [29, 42, 44] is to use available prior
process information to construct a projection type adaptation law that results
in a controlled learning process even in the presence of uncertain nonlinearities
or disturbances. In previous ARC designs, discontinuous projection mapping
(42) is used due to its simplicity and ease of implementation. Theoretically
such a discontinuous projection mapping is valid only for a diagonal adaptation rate matrix \Gamma, which is not a problem for previous DARC designs that
only use gradient type adaptation laws. For IARC introduced here, a least
squares type adaptation law will be used to achieve better convergence of pa-
rameter estimates, which will make the adaptation rate matrix time-varying
and non-diagonal. As such, in principle, (42) cannot be used anymore. Instead, the standard projection mapping \mathrm{Proj}_{\hat{\theta}}(\bullet) is used:^3

\mathrm{Proj}_{\hat{\theta}}(\Gamma\tau) = \begin{cases} \Gamma\tau, & \text{if } \hat{\theta} \in \mathring{\Omega}_\theta \text{ or } n_{\hat{\theta}}^T\Gamma\tau \le 0 \\ \left(I - \Gamma\dfrac{n_{\hat{\theta}} n_{\hat{\theta}}^T}{n_{\hat{\theta}}^T \Gamma n_{\hat{\theta}}}\right)\Gamma\tau, & \text{if } \hat{\theta} \in \partial\Omega_\theta \text{ and } n_{\hat{\theta}}^T\Gamma\tau > 0 \end{cases}   (51)

where \Gamma(t) can be any time-varying positive definite symmetric matrix. In (51), \mathring{\Omega}_\theta and \partial\Omega_\theta denote the interior and the boundary of \Omega_\theta respectively, and n_{\hat{\theta}} represents the outward unit normal vector at \hat{\theta} \in \partial\Omega_\theta. Such a projection mapping has the same useful properties as the discontinuous one (41), i.e.,
Property (43) still remains valid.
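The mapping (51) can be sketched as follows. For illustration only, \Omega_\theta is taken to be a ball of radius R centered at the origin, so that the outward unit normal at a boundary point has the closed form n = \hat{\theta}/\|\hat{\theta}\|; the chapter's \Omega_\theta is a general convex set.

```python
import numpy as np

# Sketch of the standard projection mapping (51), assuming for the demo
# that Omega_theta is a ball of radius R centered at the origin.
def proj(theta_hat, Gamma, tau, R):
    v = Gamma @ tau
    norm = np.linalg.norm(theta_hat)
    if norm < R:                       # interior of Omega_theta
        return v
    n = theta_hat / norm               # outward unit normal on the boundary
    if n @ v <= 0:                     # update already points inward
        return v
    # remove the outward component, scaled by Gamma as in (51)
    correction = Gamma @ np.outer(n, n) / (n @ Gamma @ n)
    return (np.eye(len(theta_hat)) - correction) @ v

theta_hat = np.array([1.0, 0.0])       # a boundary point of the unit ball
Gamma = np.array([[2.0, 0.5], [0.5, 1.0]])   # non-diagonal, positive definite
tau = np.array([1.0, 1.0])
update = proj(theta_hat, Gamma, tau, R=1.0)
```

On the boundary with an outward-pointing raw update, the projected update is tangential (its component along n is exactly zero), so the estimate cannot leave the set; in the interior the update is unmodified.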
The projection-type adaptation law structure (51) gives parameter estimates that are bounded within known bounds, regardless of the estimation function used. As a result, the same control law as in the DARC designs (i.e., (29) and (31)) can be used to guarantee transient and steady-state output tracking performance, independent of the specific identifier to be used.
Thus, to complete the IARC design, it is necessary to construct suitable estimation functions that yield an improved steady-state tracking accuracy (zero steady-state tracking error in the presence of parametric uncertainties only) along with an improved parameter estimation process. For this purpose, it is assumed that the system is free of uncertain nonlinearities, i.e., \Delta = 0 in (1), so that linear regression models can be constructed and standard parameter estimation algorithms applied to obtain \hat{\theta}. The details are given below and shown graphically in Fig. 9.
The filtered system dynamics of (1) are first obtained via any stable LTI filter transfer function H_f(s) with a relative degree no less than one:

\dot{x}_f = \varphi_f^T\theta + u_f   (52)

where x_f = H_f[x], \varphi_f = H_f[\varphi(x)], and u_f = H_f[u] are the filtered output, regressor vector, and input respectively. Define the measured output of a linear regression model using the actual plant parameters and the predicted output of the linear regression model using the on-line parameter estimates as

y_p := \dot{x}_f - u_f, \qquad \hat{y}_p := \varphi_f^T\hat{\theta}   (53)
^3 The details on the definition of convex sets and the projection mapping are given in the Appendix.
Fig. 9. Indirect adaptive robust control with physical model based parameter estimation (block diagram: plant with linear stabilizing feedback, nonlinear robust feedback, adjustable model compensation, and a physical-model-based parameter estimator)
As H_f(s) has a relative degree no less than one, \dot{x}_f can be obtained directly from the implementation of the filter dynamics for x_f. Thus, both y_p and \hat{y}_p can be calculated based on the measured state x and the control input u only, and so can the prediction error defined as \varepsilon = \hat{y}_p - y_p. The relationship between the prediction error and the parameter estimation error is in the standard linear regression form

\varepsilon = \varphi_f^T\tilde{\theta}   (54)
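The construction (52)-(54) can be sketched as below for a scalar first-order plant, using the first-order filter H_f(s) = \lambda/(s+\lambda) (relative degree one) and simple Euler integration. The plant, filter pole, step size, and probing input are illustrative choices, not taken from the chapter.

```python
import numpy as np

# Filtered regression model (52)-(54) for the demo plant
# xdot = phi(x)^T theta + u, with H_f(s) = lam/(s + lam).
lam, dt = 5.0, 1e-3                    # filter pole and Euler step (demo values)
theta = np.array([-1.5, 0.7])          # "true" parameters for the demo

x, xf, uf = 0.0, 0.0, 0.0
phif = np.zeros(2)
for k in range(4000):
    t = k * dt
    u = np.sin(2 * t)                  # probing input
    phi = np.array([x, 1.0])           # regressor of the demo plant
    xdot = phi @ theta + u             # plant (1) with Delta = 0
    xf += dt * lam * (x - xf)          # pass every signal through H_f
    uf += dt * lam * (u - uf)
    phif += dt * lam * (phi - phif)
    x += dt * xdot

xf_dot = lam * (x - xf)                # \dot x_f without differentiating x
y_p = xf_dot - uf                      # measured output of the regression model
y_hat = phif @ theta                   # predicted output with perfect estimates
eps = y_hat - y_p                      # prediction error (54): ~0 for exact theta
```

Note that \dot{x}_f is computed from the filter state itself, which is the point of requiring relative degree at least one: no numerical differentiation of the measured state is needed.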
Various standard recursive estimation algorithms can then be used to obtain the on-line parameter estimates \hat{\theta}. For example, when the generalized parameter adaptation algorithm (PAA) with covariance limiting introduced previously is used, the resulting parameter adaptation law becomes
\dot{\hat{\theta}}(t) = \mathrm{Proj}_{\hat{\theta}}\left(-\Gamma(t)\varphi_f(t)\varepsilon(t)\right)

\dot{\Gamma}_D = \alpha_1(t)\Gamma(t) - \alpha_2(t)\Gamma(t)\varphi_f(t)\varphi_f^T(t)\Gamma(t)

\dot{\Gamma}(t) = \begin{cases} \dot{\Gamma}_D, & \text{if } \lambda_M(\Gamma(t)) < \rho_M \text{ or } v_M^T\dot{\Gamma}_D v_M \le 0 \\ 0, & \text{otherwise} \end{cases}   (55)

where \alpha_1(t) and \alpha_2(t) are two non-negative functions, v_M represents the eigenvector corresponding to \lambda_M(\Gamma(t)) when \lambda_M(\Gamma(t)) = \rho_M, and \rho_M is a pre-set upper bound for the covariance matrix \Gamma(t). In (55), \alpha_1(t) represents the forgetting factor, with \alpha_1(t) = 0 for no forgetting. \alpha_2(t) is introduced so that the algorithm incorporates both the standard least-squares type estimation algorithm (by setting \alpha_2(t) = 1, \forall t) and the gradient-type estimation algorithm (by setting \alpha_2(t) = 0, \forall t). In general, for stability, it is only required that 0 \le \alpha_2(t) < 2. It is assumed that the covariance is initialized to a value \Gamma(0) < \rho_M I_p before using the parameter adaptation law (55).
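A discrete-time sketch of the estimator (55) is given below: a least-squares update with forgetting factor \alpha_1, weighting \alpha_2, and covariance limiting at \rho_M. The regressor, gains, and step size are synthetic, illustrative choices; for brevity the parameter projection (51) is omitted from the \hat{\theta} update.

```python
import numpy as np

# Euler-discretized sketch of the PAA with covariance limiting (55).
theta_true = np.array([-1.5, 0.7])
theta_hat = np.zeros(2)
Gamma = np.eye(2) * 10.0
alpha1, alpha2, rho_M, dt = 2.0, 1.0, 200.0, 1e-3

for k in range(20000):
    t = k * dt
    phif = np.array([np.sin(t), np.cos(0.5 * t)])    # persistently exciting regressor
    eps = phif @ (theta_hat - theta_true)            # prediction error (54)
    Gp = Gamma @ phif
    Gamma_D = alpha1 * Gamma - alpha2 * np.outer(Gp, Gp)   # candidate dGamma/dt
    lam, vecs = np.linalg.eigh(Gamma)
    v_M = vecs[:, -1]                                # eigenvector of the largest eigenvalue
    if lam[-1] < rho_M or v_M @ Gamma_D @ v_M <= 0:  # covariance limiting test
        Gamma = Gamma + dt * Gamma_D                 # ...otherwise Gamma is frozen
    theta_hat = theta_hat - dt * (Gamma @ phif) * eps
```

With persistent excitation the estimates converge toward their true values while the forgetting factor keeps the covariance from collapsing and the limiting logic keeps it below \rho_M.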
Though the design of the indirect adaptive robust control presented above is straightforward, the proof of asymptotic output tracking in the presence of parametric uncertainties only is rather difficult due to the different regressors used in the model compensation (i.e., \varphi) and in the parameter adaptation law (i.e., \varphi_f). In general, the standard parameter estimation algorithm (55) only guarantees that the prediction error \varepsilon(t) (i.e., \varphi_f^T\tilde{\theta}) has a finite L_2 norm or converges to zero asymptotically, which by itself is not sufficient to prove asymptotic output tracking as in the direct adaptive robust control designs. However, when the filter transfer function H_f(s) has the same relative degree as the tracking error dynamics, a definite relationship between their effects does exist, as seen from the following theorem summarizing the theoretical performance of the above IARC design.

Theorem 7. With the IARC control law (29) and the physical model based parameter estimation (55), when the filter transfer function H_f(s) has a relative degree of one, the same theoretical results as stated in Theorem 5 are obtained.
When the relative degree of the filter transfer function H_f(s) is higher than that of the tracking error dynamics, asymptotic output tracking cannot be inferred in general unless the PE condition is satisfied, as summarized in the following theorem.
Theorem 8. With the IARC control law (29) and the physical model based parameter estimation (55), when the following PE condition is satisfied:

\exists\, T,\, t_0,\, \epsilon_p > 0 \;\text{ s.t. }\; \int_t^{t+T} \varphi_f\varphi_f^T\, d\nu \ge \epsilon_p I_p, \quad \forall t \ge t_0   (56)
in addition to the results in part A of Theorem 5, the parameter estimates
asymptotically converge to their true values and the asymptotic output tracking
is achieved in the presence of parametric uncertainties only.
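The PE condition (56) also suggests a simple on-line excitation monitor: integrate \varphi_f\varphi_f^T over a sliding window and inspect the smallest eigenvalue of the resulting Gram matrix. The sketch below uses illustrative signals: a two-direction regressor that is persistently exciting, and a rank-one regressor that is not.

```python
import numpy as np

# Excitation-level monitor based on the PE condition (56).
def excitation_level(samples, dt):
    # samples: (N, p) array of filtered-regressor samples over one window
    W = dt * sum(np.outer(s, s) for s in samples)
    return np.linalg.eigvalsh(W)[0]              # smallest eigenvalue of the Gram matrix

dt = 1e-2
t = np.arange(0.0, 5.0, dt)
rich = np.column_stack([np.sin(t), np.cos(t)])   # two independent directions: PE holds
poor = np.column_stack([np.sin(t), np.sin(t)])   # rank-one regressor: PE fails
```

Thresholding this level is one way to implement the explicit excitation monitoring mentioned in the next subsection, e.g., freezing the adaptation of directions that are not excited.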
5.5 Integrated Direct/Indirect Adaptive Robust Control (DIARC)
IARC designs completely separate the parameter estimation law design from
the underlying robust control law to give better parameter estimates than
possible with DARC designs. In addition, explicit on-line signal excitation
level monitoring can be employed to signicantly improve the accuracy of pa-
rameter estimates. In implementation, the discrete time PAA algorithms can
be used as well. With these algorithm improvements, the resulting parameter
estimates could be accurate enough for secondary purposes such as machine
health monitoring and prognosis.
Comparative experimental results [37] show that IARC designs have much
better parameter estimation accuracy than DARCs. However, IARC output
tracking performance may not be as good as that from DARC, especially
during transient periods. A thorough analysis reveals that the poorer IARC
tracking performance is caused by the loss of dynamic compensation type
fast adaptation that is inherent in DARC designs. To overcome this IARC
problem, an integrated direct/indirect ARC (DIARC) design framework is
developed in [31]. As shown in Fig. 10, the design not only uses the same
IARC adaptation process for accurate estimation of physical parameters, but
also introduces dynamic compensation type fast adaptation to achieve better
transient performance.
Fig. 10. Integrated direct/indirect adaptive robust control (block diagram: the IARC structure of Fig. 9 augmented with a fast dynamic compensation type adaptation loop)
For (1), the resulting DIARC law is:

u = u_m + u_s, \quad u_m = u_{m1} + u_{m2}, \quad u_{m1} = \dot{x}_d - \varphi^T\hat{\theta}(t), \quad u_{m2} = -\hat{d}_c

u_s = u_{s1} + u_{s2}, \quad u_{s1} = -kz, \quad u_{s2} = -k_{s2}(x, t)z   (57)
u_{m1} represents the usual model compensation, with the physical parameter estimates \hat{\theta}(t) updated using the IARC parameter estimator (i.e., (55)). u_{m2} is a term similar to the fast dynamic compensation type model compensation used in the DARC design, in which \hat{d}_c can be thought of as an estimate of the low-frequency component of the lumped model uncertainties defined later. From (1) and (57), the tracking error dynamics become

\dot{z} + kz = u_{s2} + u_{m2} - \varphi^T\tilde{\theta} + \Delta   (58)
Define a constant d_c and a time-varying \tilde{\Delta}(t) such that

d_c + \tilde{\Delta}(t) = -\varphi^T\tilde{\theta} + \Delta   (59)

Conceptually, (59) lumps the model compensation errors due to the uncertain nonlinearities and the physical parameter estimation error together and then divides the total into a static component (or low-frequency component in reality) d_c and a high-frequency component \tilde{\Delta}(t). The estimate \hat{d}_c is updated by

\dot{\hat{d}}_c = \mathrm{Proj}(\gamma_d z) = \begin{cases} 0 & \text{if } |\hat{d}_c| = d_{cM} \text{ and } \hat{d}_c z > 0 \\ \gamma_d z & \text{otherwise} \end{cases}   (60)

with \gamma_d > 0 and |\hat{d}_c(0)| \le d_{cM}.
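The adaptation (60) is a scalar projection-type integrator and can be sketched directly; the adaptation rate, bound, and step size below are illustrative values.

```python
import numpy as np

# Sketch of the fast dynamic-compensation adaptation (60): a scalar
# projection-type integrator for dc_hat with rate gamma_d and bound d_cM.
def dc_update(dc_hat, z, gamma_d, d_cM, dt):
    if abs(dc_hat) >= d_cM and dc_hat * z > 0:   # on the bound, pushing outward
        return dc_hat                            # projection freezes the update
    return dc_hat + dt * gamma_d * z             # plain integration otherwise

dc = 0.0
for _ in range(1000):                            # a constant error drives dc to its bound
    dc = dc_update(dc, z=1.0, gamma_d=50.0, d_cM=2.0, dt=1e-2)
dc_back = dc_update(dc, z=-1.0, gamma_d=50.0, d_cM=2.0, dt=1e-2)  # inward step allowed
```

The estimate saturates at the pre-set bound instead of winding up, yet resumes integrating as soon as the error drives it back inward, which is exactly the guarantee |d̂_c(t)| ≤ d_cM stated below.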
Such an adaptation law guarantees that |\hat{d}_c(t)| \le d_{cM}, \forall t. Substituting (59) into (58) and noting (57),

\dot{z} + kz = u_{s2} + u_{m2} + d_c + \tilde{\Delta}(t) = u_{s2} - \tilde{d}_c + \tilde{\Delta}(t)   (61)

where \tilde{d}_c = \hat{d}_c - d_c. Due to the use of a projection type adaptation law, all
estimation errors are now bounded within known bounds. Thus, it is possible to use the same robust feedback synthesis technique as in the DARC designs to construct u_{s2} so that the following robust performance conditions are satisfied:

i. \; z u_{s2} \le 0

ii. \; z\left[u_{s2} - \tilde{d}_c + \tilde{\Delta}(t)\right] = z\left[u_{s2} - \hat{d}_c - \varphi^T\tilde{\theta} + \Delta(x, t)\right] \le \varepsilon(t)   (62)

For example, let u_{s2} = -S(h\,\mathrm{sgn}(z)), where h is any bounding function satisfying h \ge d_{cM} + |\varphi|^T|\theta_{\max} - \theta_{\min}| + \delta(x, t). Thus, the same robust performance
results as in part A of Theorem 5 can be obtained for the output tracking.
Furthermore, similar to Theorems 7 and 8, the following theorems can be
obtained on the asymptotic output tracking in the presence of parametric
uncertainties only.
Theorem 9. Consider the DIARC control law (57) with the physical model based parameter estimation (55) and the fast dynamic compensation type adaptation (60). When the filter transfer function H_f(s) has a relative degree of one, the same theoretical results as stated in Theorem 5 are obtained.
\dot{x}_2 = bu + \varphi(x)^T\theta + \Delta(x, t)   (65)

where b = K_f/m, \varphi(x) = [-x_2, -S_f(x_2), -1]^T, \theta = [\theta_1, \theta_2, \theta_3]^T = \frac{1}{m}[B, A_f, F_{dn}]^T, \Delta = \tilde{F}/m, and, for simplicity of notation, \varphi_b^T = [\varphi^T, u] and \theta_b = [\theta^T, b]^T. The linear motor dynamics involve both parametric uncertainties, for \theta_b, and uncertain nonlinearities, for \Delta(x, t), and their extents are assumed to be known as in (2).
Let y_d(t) be the desired position trajectory and take the position tracking error as z_1 = x_1 - y_d(t). Let the virtual velocity control input be \alpha_1 = \dot{y}_d(t) - k_1 z_1, where k_1 represents the position-loop proportional feedback gain. Also denote the velocity-loop tracking error as z_2 = x_2 - \alpha_1. It is clear from (64) that when z_2 = 0, or x_2 = \alpha_1, \dot{z}_1 = -k_1 z_1 and the position tracking error
z_1 will exponentially converge to zero. Thus, all that remains is to design a control law such that z_2(t) converges to zero or to within some tolerance. From (65),

\dot{z}_2 = bu + \varphi(x)^T\theta + \Delta(x, t) - \dot{\alpha}_1 = \varphi_b(x, u)^T\theta_b + \Delta(x, t) - \dot{\alpha}_1   (66)

Equation (66) is a first-order dynamics of the form of (1). As such, all the ARC designs of the previous section can be applied directly. All the resulting algorithms are implemented on the Y-axis of a two-axis precision positioning stage, described in [26], that is driven by an Anorad LEM-S-3-S epoxy core linear motor. The position measurement resolution is 1 μm. The same high-speed/high-acceleration back-and-forth motion trajectory given in [26] is used in all experiments. The desired trajectory travels 0.4 m with a maximum speed of 1 m/s and a maximum acceleration of 12 m/s^2.
Plots 1 and 2 of Fig. 11 show that the tracking performance of DARC is very good. Even with large desired acceleration and speed, the tracking error during the entire run is within 20 μm; it is only a few micrometers during the constant-velocity portions (e.g., the segment between 36.2 and 36.4 sec in Plot 2) and returns almost immediately to nearly zero after each deceleration (e.g., around 36.6 and 37.9 sec in Plot 2). Better tracking performance can be obtained by using more sophisticated DARC designs such as the desired compensation DARC [26]. However, the parameter estimates shown in Plot 3 of Fig. 11 do not converge at all, even after more than 8 back-and-forth cycles.
The PE conditions of both DARC and IARC are not met most of the time, notably during the constant-speed portions of the movement. However, simple conditioning of the parameter adaptation process (switching off the adaptation of a particular parameter when the corresponding component of the regressor is too small) bypasses the problem quite easily. Plot 3 shows that the parameter estimates of IARC do converge, unlike with DARC. The tracking performance of the proposed IARC is good as well; the tracking error during the entire run is mostly within 20 μm.
Although IARC has much better parameter estimation accuracy than
DARC, the output tracking performance of IARC may not be as good as
DARC. It is seen in Plot 2 that IARC does have a bit larger transient error
at the beginning and most notably, non-zero tracking error when the system
stops (e.g., the segment between 37.2 and 37.9 seconds) due to the loss of
dynamic compensation type fast adaptation. Such a problem is overcome by
DIARC as seen in the plot. DIARC not only has converging parameter es-
timates like IARC, but also improved tracking performance. Tracking errors
during the zero velocity portions are within the encoder resolution and smaller
transient tracking errors occur as well, especially at the beginning of the run.
These comparative experimental results verify the previous claims.
Fig. 11. Comparative Experimental Results. Plot 1: tracking errors of (a) DARC, (b) IARC, and (c) DIARC over the whole run; Plot 2: zoomed-in portions of the tracking errors; Plot 3: parameter estimates \theta_1 through \theta_4 for DARC (dash-dot), IARC (dotted), and DIARC (solid), with true values (dashed) where available.
References
1. K. J. Astrom and B. Wittenmark. Adaptive Control. Addison-Wesley Publishing Company, second edition, 1995.
2. F. Bu and Han-Shue Tan. Pneumatic brake control for precision stopping of heavy-duty vehicles. IEEE Transactions on Control Systems Technology, 15(1):53-64, 2007.
3. M. J. Corless and G. Leitmann. Continuous state feedback guaranteeing uniform ultimate boundedness for uncertain dynamic systems. IEEE Trans. on Automatic Control, 26(5):1139-1144, 1981.
4. P. V. Kokotovic, editor. Foundations of Adaptive Control. Springer-Verlag, Berlin, 1991.
5. R. A. Freeman and P. V. Kokotovic. Design of 'softer' robust nonlinear control laws. Automatica, 29(6):1425-1437, 1993.
6. R. A. Freeman, M. Krstic, and P. V. Kokotovic. Robustness of adaptive nonlinear control to bounded uncertainties. Automatica, 34(10):1227-1230, 1998.
7. J. Q. Gong and Bin Yao. Neural network adaptive robust control of nonlinear systems in semi-strict feedback form. Automatica, 37(8):1149-1160, 2001. (Special Issue on Neural Networks for Feedback Control).
8. G. C. Goodwin and K. S. Sin. Adaptive Filtering Prediction and Control. Prentice-Hall, Englewood Cliffs, New Jersey, 1984.
9. J. K. Hedrick and P. P. Yip. Multiple sliding surface control: theory and application. ASME Journal of Dynamic Systems, Measurement, and Control, 122(4):586-593, 2000.
10. A. Isidori. Nonlinear Control Systems: An Introduction. Springer-Verlag, 1989.
11. H. K. Khalil. Nonlinear Systems. Prentice Hall, third edition, 2002.
12. P. V. Kokotovic. Joy of feedback: nonlinear and adaptive. Bode Prize Lecture, 1991.
13. M. Krstic, I. Kanellakopoulos, and P. V. Kokotovic. Nonlinear and Adaptive Control Design. Wiley, New York, 1995.
14. L. Lu, Z. Chen, Bin Yao, and Q. Wang. Desired compensation adaptive robust control of a linear motor driven precision industrial gantry with improved cogging force compensation. IEEE/ASME Transactions on Mechatronics, 13(6):617-624, 2008.
15. K. S. Narendra and A. M. Annaswamy. Robust adaptive control in the presence of bounded disturbance. IEEE Trans. on Automatic Control, 31:306-216, 1986.
16. K. S. Narendra and A. M. Annaswamy. A new adaptive law for robust adaptive control without persistent excitation. IEEE Trans. on Automatic Control, 32:134-145, 1987.
17. J. B. Pomet and L. Praly. Adaptive nonlinear regulation: estimation from the Lyapunov equation. IEEE Trans. on Automatic Control, 37:729-740, 1992.
18. Z. Qu and J. F. Dorsey. Robust control of generalized dynamic systems without matching conditions. ASME J. Dynamic Systems, Measurement, and Control, 113:582-589, 1991.
19. C. Rohrs, L. Valavani, M. Athans, and G. Stein. Robustness of continuous-time adaptive control algorithms in the presence of unmodeled dynamics. IEEE Trans. on Automatic Control, 30:881-889, 1985.
20. N. Sadegh, R. Horowitz, W. W. Kao, and M. Tomizuka. A unified approach to the design of adaptive and repetitive controllers for robotic manipulators. ASME J. of Dynamic Systems, Measurement, and Control, 112(4):618-629, 1990.
21. S. Sastry and A. Isidori. Adaptive control of linearizable systems. IEEE Trans. on Automatic Control, 34:1123-1131, 1989.
22. J. J. E. Slotine and Weiping Li. Adaptive manipulator control: a case study. IEEE Trans. on Automatic Control, 33(11):995-1003, 1988.
23. J. J. E. Slotine and Weiping Li. Applied Nonlinear Control. Prentice Hall, Englewood Cliffs, New Jersey, 1991.
24. A. R. Teel. Adaptive tracking with robust stability. In Proc. 32nd Conf. on Decision and Control, pages 570-575, 1993.
25. V. I. Utkin. Variable structure systems with sliding modes. IEEE Trans. on Automatic Control, 22(2):212-222, 1977.
26. Li Xu and Bin Yao. Adaptive robust precision motion control of linear motors with negligible electrical dynamics: theory and experiments. IEEE/ASME Transactions on Mechatronics, 6(4):444-452, 2001.
27. Li Xu and Bin Yao. Output feedback adaptive robust precision motion control of linear motors. Automatica, 37(7):1029-1039, 2001. Finalist for the Best Student Paper Award of the ASME Dynamic Systems and Control Division at IMECE'00.
28. Bin Yao. Adaptive robust control of nonlinear systems with application to control of mechanical systems. PhD thesis, University of California at Berkeley, Berkeley, USA, 1996.
29. Bin Yao. High performance adaptive robust control of nonlinear systems: a general framework and new schemes. In Proc. of IEEE Conference on Decision and Control, pages 2489-2494, San Diego, 1997.
30. Bin Yao. Desired compensation adaptive robust control. In the ASME International Mechanical Engineering Congress and Exposition (IMECE), DSC-Vol. 64, pages 569-575, Anaheim, 1998.
31. Bin Yao. Integrated direct/indirect adaptive robust control of SISO nonlinear systems transformable to semi-strict feedback forms. In American Control Conference, pages 3020-3025, 2003. The O. Hugo Schuck Best Paper (Theory) Award from the American Automatic Control Council in 2004.
32. Bin Yao. Desired compensation adaptive robust control. ASME Journal of Dynamic Systems, Measurement, and Control, 2009. (Accepted in 2006; in press.)
33. Bin Yao, M. Al-Majed, and M. Tomizuka. High performance robust motion control of machine tools: an adaptive robust control approach and comparative experiments. IEEE/ASME Trans. on Mechatronics, 2(2):63-76, 1997.
34. Bin Yao, F. Bu, J. Reedy, and G. T. C. Chiu. Adaptive robust control of single-rod hydraulic actuators: theory and experiments. IEEE/ASME Trans. on Mechatronics, 5(1):79-91, 2000.
35. Bin Yao, S. P. Chan, and Danwei Wang. Unified formulation of variable structure control schemes to robot manipulators. IEEE Trans. on Automatic Control, 39(2):371-376, 1994. Part of the paper appeared in the Proc. of the American Control Conference, pages 1282-1286, 1992.
36. Bin Yao and C. Deboer. Energy-saving adaptive robust motion control of single-rod hydraulic cylinders with programmable valves. In American Control Conference, pages 4819-4824, 2002.
37. Bin Yao and R. Dontha. Integrated direct/indirect adaptive robust precision control of linear motor drive systems with accurate parameter estimations. In the 2nd IFAC Conference on Mechatronic Systems, pages 633-638, 2002.
38. Bin Yao and A. Palmer. Indirect adaptive robust control of SISO nonlinear systems in semi-strict feedback forms. In IFAC World Congress, T-Tu-A03-2, pages 1-6, 2002.
39. Bin Yao and M. Tomizuka. Robust adaptive motion and force control of robot manipulators in unknown stiffness environment. In Proc. of IEEE Conf. on Decision and Control, pages 142-147, San Antonio, 1993.
40. Bin Yao and M. Tomizuka. Comparative experiments of robust and adaptive control with new robust adaptive controllers for robot manipulators. In Proc. of IEEE Conf. on Decision and Control, pages 1290-1295, Orlando, 1994.
41. Bin Yao and M. Tomizuka. Adaptive control of robot manipulators in constrained motion. ASME Journal of Dynamic Systems, Measurement and Control, 117(3):320-328, 1995. Part of the paper appeared in the Proc. of the American Control Conference, pages 1128-1132, 1993.
42. Bin Yao and M. Tomizuka. Smooth robust adaptive sliding mode control of robot manipulators with guaranteed transient performance. ASME J. Dyn. Syst., Meas., Control, 118(4):764-775, 1996. Part of the paper also appeared in the Proc. of the 1994 American Control Conference, pages 1176-1180.
43. Bin Yao and M. Tomizuka. Adaptive robust control of SISO nonlinear systems in a semi-strict feedback form. Automatica, 33(5):893-900, 1997. (Part of the paper appeared in the Proc. of the 1995 American Control Conference, pages 2500-2505, Seattle.)
44. Bin Yao and M. Tomizuka. Adaptive robust control of MIMO nonlinear systems in semi-strict feedback forms. Automatica, 37(9):1305-1321, 2001.
45. Bin Yao and Li Xu. Observer based adaptive robust control of a class of nonlinear systems with dynamic uncertainties. International Journal of Robust and Nonlinear Control, 15(11):335-356, 2001.
46. Bin Yao and Li Xu. Output feedback adaptive robust control of uncertain linear systems with disturbances. ASME Journal of Dynamic Systems, Measurement, and Control, 128(4):1-9, 2006.
47. Jinghua Zhong and Bin Yao. Adaptive robust repetitive control of piezoelectric actuators. In the ASME International Mechanical Engineering Congress and Exposition (IMECE), IMECE2005-81967, pages 1-8, 2005. Finalist of the Best Student Paper Competition of the ASME Dynamic Systems and Control Division (DSCD).
48. X. Zhu, G. Tao, Bin Yao, and J. Cao. Adaptive robust posture control of a pneumatic muscle driven parallel manipulator. Automatica, 44(9):2248-2257, 2008.
49. A. S. I. Zinober. Deterministic Control of Uncertain Control Systems. Peter Peregrinus Ltd., London, United Kingdom, 1990.
Appendix: Proofs of All Lemmas and Theorems
Proof (Theorem 1). Noting (9) and (10), the time derivative of the positive definite (p.d.) function

V_a = \frac{1}{2}z^2 + \frac{1}{2}\tilde{\theta}^T\Gamma^{-1}\tilde{\theta}   (67)

is

\dot{V}_a = z\left[-\tilde{\theta}^T\varphi(x) - kz\right] + \tilde{\theta}^T\Gamma^{-1}\dot{\tilde{\theta}} = -kz^2 \le 0   (68)

in which the fact that the unknown parameter vector \theta is assumed constant has been used in obtaining \dot{\tilde{\theta}} = \dot{\hat{\theta}}. Thus, \forall t, V_a(t) \le V_a(0), which leads to z \in L_\infty and \tilde{\theta} \in L_\infty. Furthermore,

\int_0^t z^2(\nu)\, d\nu = -\frac{1}{k}\int_0^t \dot{V}_a(\nu)\, d\nu = \frac{1}{k}\left[V_a(0) - V_a(t)\right] \le \frac{1}{k}V_a(0)   (69)

which implies that z \in L_2. As z \in L_\infty and \tilde{\theta} \in L_\infty, from (9), \dot{z} \in L_\infty and thus z is uniformly continuous. By Barbalat's lemma [23], z \to 0 as t \to \infty, i.e., asymptotic output tracking is achieved. This completes the proof of the first part of the theorem.
Since all the terms on the right-hand side of (9) are uniformly continuous, \dot{z} is uniformly continuous. Noting that \int_0^t \dot{z}(\nu)\, d\nu = z(t) - z(0) \to -z(0) as t \to \infty, by Barbalat's lemma, \dot{z} \to 0 as t \to \infty. Thus, from (9), \varphi(x)^T\tilde{\theta} \to 0 as t \to \infty, which indicates that the total model compensation error due to the parametric uncertainties is asymptotically eliminated by using the parameter adaptation law (10). For any finite T, the fact that \varphi(x)^T\tilde{\theta} \to 0 as t \to \infty leads to \int_t^{t+T} \tilde{\theta}^T(\nu)\varphi(x(\nu))\varphi(x(\nu))^T\tilde{\theta}(\nu)\, d\nu \to 0 as t \to \infty. As t \to \infty, since x(t) \to x_d(t) and \dot{\tilde{\theta}} \to 0, this is the same as

\tilde{\theta}(t)^T\left[\int_t^{t+T}\varphi(x_d(\nu))\varphi(x_d(\nu))^T\, d\nu\right]\tilde{\theta}(t) \to 0

which, under the PE condition (11), bounds the left-hand side from below by \epsilon_p\|\tilde{\theta}(t)\|^2 and thus leads to \tilde{\theta}(t) \to 0, i.e., the parameter estimates converge to their true values. This completes the proof.

Proof (Theorem 2). Consider the non-negative function

V_s = \frac{1}{2}z^2   (70)

Noting the error dynamics and the properties of u_{s2},

\dot{V}_s = z\dot{z} = -kz^2 + z\left[u_{s2} - \varphi(x)^T\tilde{\theta}_o + \Delta(x, t)\right] \le -kz^2 = -2kV_s   (71)

Therefore, using the comparison lemma (page 102 of [11]),

V_s(t) \le \exp(-2kt)V_s(0), \;\forall t \quad \Longrightarrow \quad |z(t)| \le \exp(-kt)|z(0)|, \;\forall t   (72)

which shows that z exponentially decays to zero with an exponential convergence rate of k. This completes the proof of Theorem 2.
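The bound (72) is easy to verify numerically: for any error dynamics of the form \dot{z} = -kz + w(t) with z\,w \le 0 (the sign property that condition i on u_{s2} guarantees), |z(t)| stays below \exp(-kt)|z(0)|. The disturbance shape and step size below are illustrative choices.

```python
import numpy as np

# Numerical check of the exponential bound (72).
k, dt = 4.0, 1e-4
z, z0 = 1.0, 1.0
bound_holds = True
for i in range(int(2.0 / dt)):
    w = -0.5 * z                       # any term satisfying z*w <= 0 will do
    z += dt * (-k * z + w)             # Euler step of zdot = -k*z + w
    t = (i + 1) * dt
    bound_holds = bound_holds and abs(z) <= np.exp(-k * t) * abs(z0) + 1e-12
```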
Proof (Theorem 3). With the continuous robust control law (20), the error dynamics of the closed-loop system are given by

\dot{z} + kz + S(h\,\mathrm{sgn}(z)) = -\varphi(x)^T\tilde{\theta}_o + \Delta(x, t)   (73)

Following the same steps as in (71) and noting (19), the time derivative of V_s is now given by

\dot{V}_s \le -kz^2 - zS(h\,\mathrm{sgn}(z)) + |z|\left(|\varphi(x)|^T|\tilde{\theta}_o|_{\max} + \delta(x, t)\right)
\le -kz^2 + z\left[h\,\mathrm{sgn}(z) - S(h\,\mathrm{sgn}(z))\right]
\le -2kV_s + \varepsilon(t)   (74)

which leads to (21) by applying the comparison lemma. As seen from (21), theoretically, both the exponentially converging rate k and the steady-state value \varepsilon_M/k of the upper bound of the norm of the tracking error z(t) can be freely adjusted by the controller parameters k and \varepsilon in a known form. The theorem is thus proved.
Proof (Theorem 4). Viewing the error dynamics (33) and the robust performance condition ii of (37), Part A of Theorem 4 can be proved using the same technique as in the proof of Theorem 3 in the DRC development. The following proves Part B of Theorem 4.
Define

V_{\theta i}(\tilde{\theta}_i, \theta_i) = \frac{1}{\gamma_i}\int_0^{\tilde{\theta}_i}\left[\pi_i(\theta_i + \nu) - \theta_i\right] d\nu, \quad \gamma_i > 0, \; i = 1, \ldots, p   (75)

where \pi_i(\cdot) denotes the i-th component of the smooth mapping used in the adaptation law (40). Viewing Assumption A1 (2), \forall i, \pi_i(0 + \theta_i) - \theta_i = 0, and thus \pi_i(\tilde{\theta}_i + \theta_i) - \theta_i is a nondecreasing function of \tilde{\theta}_i passing through the origin. Therefore, V_{\theta i} defined by (75) is positive definite w.r.t. \tilde{\theta}_i. Thus V_\theta defined by

V_\theta(\tilde{\theta}, \theta) = \sum_{i=1}^p V_{\theta i}(\tilde{\theta}_i, \theta_i)   (76)

is a positive definite function of \tilde{\theta}. Since, for each i,

\frac{\partial V_{\theta i}}{\partial \tilde{\theta}_i}(\tilde{\theta}_i, \theta_i) = \frac{1}{\gamma_i}\left[\pi_i(\tilde{\theta}_i + \theta_i) - \theta_i\right] = \frac{1}{\gamma_i}\left[\pi_i(\hat{\theta}_i) - \theta_i\right]   (77)

it is straightforward to verify that

\frac{\partial V_\theta}{\partial \tilde{\theta}} = \left[\frac{1}{\gamma_1}\left(\pi_1(\hat{\theta}_1) - \theta_1\right), \ldots, \frac{1}{\gamma_p}\left(\pi_p(\hat{\theta}_p) - \theta_p\right)\right]^T = \Gamma^{-1}\left(\pi(\hat{\theta}) - \theta\right)   (78)
where \Gamma = \mathrm{diag}\{\gamma_1, \ldots, \gamma_p\}. Now consider the following positive definite function

V_a = \frac{1}{2}z^2 + V_\theta(\tilde{\theta}, \theta)   (79)

Noting (33), the robust performance condition i of (37), and (78), when \Delta = 0, the time derivative of V_a with the adaptation law (40) is

\dot{V}_a = z\left[-kz + u_{s2} - \left(\pi(\hat{\theta}) - \theta\right)^T\varphi(x)\right] + \left(\pi(\hat{\theta}) - \theta\right)^T\Gamma^{-1}\dot{\hat{\theta}}
\le -kz^2 + \left(\pi(\hat{\theta}) - \theta\right)^T\left[-\varphi(x)z + \Gamma^{-1}\dot{\hat{\theta}}\right] \le -kz^2   (80)

Thus, \forall t, integrating (80),

\int_0^t z^2(\nu)\, d\nu \le -\frac{1}{k}\int_0^t \dot{V}_a(\nu)\, d\nu = \frac{1}{k}\left[V_a(0) - V_a(t)\right] \le \frac{1}{k}V_a(0)   (81)

which implies that z \in L_2. As z \in L_\infty and \tilde{\theta} \in L_\infty, from (33), \dot{z} \in L_\infty and thus z is uniformly continuous. By Barbalat's lemma [23], z \to 0 as t \to \infty, i.e., asymptotic output tracking is achieved. Following the same steps as in the rest of the proof of Theorem 1, it is straightforward to show that the parameter estimates converge to their true values when the PE condition (11) is satisfied. This completes the proof.
Proof. Proof of Property 1
Whenever $\hat\theta_i$ reaches its lower or upper limit, the projection mapping given by (41) guarantees that $\dot{\hat\theta}_i$ always points to the interior or along the tangential direction of the known range $[\theta_{i\min},\theta_{i\max}]$. Thus, if $\hat\theta_i(0) \in [\theta_{i\min},\theta_{i\max}]$, then $\hat\theta_i(t) \in [\theta_{i\min},\theta_{i\max}]$, $\forall t$, which proves P1 of (43).

Since $\Gamma = \mathrm{diag}\{\gamma_1,\dots,\gamma_p\}$,

\[
\tilde\theta^T\left[\Gamma^{-1}\mathrm{Proj}_{\hat\theta}(\Gamma\tau) - \tau\right] = \sum_{i=1}^p \tilde\theta_i\left[\frac{1}{\gamma_i}\mathrm{Proj}_{\hat\theta_i}(\gamma_i\tau_i) - \tau_i\right] \tag{82}
\]

When $\hat\theta_i = \theta_{i\max}$ and $\gamma_i\tau_i > 0$, i.e., the first condition in (41) is satisfied, we have $\tilde\theta_i = \hat\theta_i - \theta_i = \theta_{i\max} - \theta_i \ge 0$ and $\frac{1}{\gamma_i}\mathrm{Proj}_{\hat\theta_i}(\gamma_i\tau_i) - \tau_i = -\tau_i \le 0$, which indicates that $\tilde\theta_i\bigl[\frac{1}{\gamma_i}\mathrm{Proj}_{\hat\theta_i}(\gamma_i\tau_i) - \tau_i\bigr] \le 0$. Similarly, it can easily be checked that for the other two cases in the definition of the projection mapping (41), $\tilde\theta_i\bigl[\frac{1}{\gamma_i}\mathrm{Proj}_{\hat\theta_i}(\gamma_i\tau_i) - \tau_i\bigr] \le 0$. Thus, P2 of (43) is proved.
Proof. Proof of Theorem 5
Viewing the error dynamics (30) and the robust performance condition ii of
(31), Part A of Theorem 5 can be proved using the same technique as in the
proof of Theorem 3 in the DRC development. The following is to prove Part
B of Theorem 5.
Using the same positive definite function $V_a$ as in the proof of AC designs, i.e., (67), noting the robust performance condition i of (31) and the property
36 Bin Yao
P2 of (43), when $\tilde\Delta = 0$, the time derivative of $V_a$ with the adaptation rate of (41) is

\[
\dot V_a = z\left[-kz + u_{s2} - \tilde\theta^T\varphi(x)\right] + \tilde\theta^T\Gamma^{-1}\dot{\hat\theta} \le -kz^2 + \tilde\theta^T\left[-\varphi(x)z + \Gamma^{-1}\mathrm{Proj}_{\hat\theta}\bigl(\Gamma\varphi z\bigr)\right] \le -kz^2 \tag{83}
\]
Part B of Theorem 5 can thus be proved by following the same steps as in the
rest of the proof of Theorem 1.
Proof. Proof of Theorem 6
With the DCARC law (48), the tracking error dynamics become

\[
\dot z + k_{s1}z = u_{s2} + \varphi(x)^T\theta - \varphi(x_d)^T\theta - \left[\varphi(x_d)^T\tilde\theta(t) - \tilde\Delta(x,t)\right] \tag{84}
\]
Thus, noting (47), (49) and ii of the robust performance conditions (50), the time derivative of the non-negative function $V_s$ defined by (70) is

\[
\dot V_s = z\left\{-k_{s1}z + \left[\varphi(x)-\varphi(x_d)\right]^T\theta\right\} + z\left\{u_{s2} - \left[\varphi(x_d)^T\tilde\theta(t) - \tilde\Delta(x,t)\right]\right\} \le -k_{s1}z^2 + \delta(x,t)|z|^2 + \varepsilon \tag{85}
\]

in which $\delta(x,t)|z|$ denotes the bound of (47) on $\bigl|[\varphi(x)-\varphi(x_d)]^T\theta\bigr|$. When $\tilde\Delta = 0$, the time derivative of $V_a$ is

\[
\begin{aligned}
\dot V_a &= z\left\{-k_{s1}z + \left[\varphi(x)-\varphi(x_d)\right]^T\theta\right\} + z\left[u_{s2} - \varphi(x_d)^T\tilde\theta(t)\right] + \tilde\theta^T\Gamma^{-1}\dot{\hat\theta}\\
&\le -k_{s1}z^2 + \delta(x,t)|z|^2 + \tilde\theta^T\left[-\varphi(x_d)z + \Gamma^{-1}\mathrm{Proj}_{\hat\theta}\bigl(\Gamma\varphi(x_d)z\bigr)\right] \le -kz^2 \tag{86}
\end{aligned}
\]

where the last inequality follows from P2 of (43) and the choice of $k_{s1}$ in (49).
Part B of Theorem 6 can thus be proved by following the same steps as in the
rest of the proof of Theorem 1.
Proof. Proof of Theorem 7
As the same control law as in DARC designs is used with the same projection
type parameter adaptation law structure, the proof for part A of Theorem
7 is the same as in Theorem 5. The following is to show asymptotic output
tracking when $\tilde\Delta = 0$.
For simplicity, let $u_{s2} = -k_{s2}z$, where $k_{s2}$ represents the equivalent nonlinear feedback gain, and denote the total nonlinear feedback gain w.r.t. $z$ as $k_s = k + k_{s2}$. Then, when $\tilde\Delta = 0$, the tracking error dynamics for the IARC control law (29) can be written in the simple form

\[
\dot z + k_s z = -\varphi(x)^T\tilde\theta(t) \tag{87}
\]
Without loss of generality, assume $H_f(s) = \frac{1}{\tau_f s + 1}$, where $\tau_f$ represents the filter time constant. Then $\varphi_f = H_f[\varphi(x)]$ is represented in the time domain as

\[
\dot\varphi_f = \frac{1}{\tau_f}\left[-\varphi_f + \varphi\right] \tag{88}
\]

which leads to the following relationship between $\tilde\theta^T\varphi_f$ and $\tilde\theta^T\varphi$:

\[
\tau_f\frac{d}{dt}\left(\tilde\theta^T\varphi_f\right) = -\tilde\theta^T\varphi_f + \tilde\theta^T\varphi + \tau_f\dot{\tilde\theta}^T\varphi_f \tag{89}
\]
From (87) and (89), using the completion of squares for the inequality proofs, the time derivative of the non-negative function $V_a = \frac{1}{2}\bigl(z + \tau_f\tilde\theta^T\varphi_f\bigr)^2$ is

\[
\begin{aligned}
\dot V_a &= \left(z + \tau_f\tilde\theta^T\varphi_f\right)\left(-k_s z - \tilde\theta^T\varphi - \tilde\theta^T\varphi_f + \tilde\theta^T\varphi + \tau_f\dot{\tilde\theta}^T\varphi_f\right)\\
&= -k_s z^2 - \tau_f\left(\tilde\theta^T\varphi_f\right)^2 - (1+\tau_f k_s)\,z\left(\tilde\theta^T\varphi_f\right) + \tau_f\left(z + \tau_f\tilde\theta^T\varphi_f\right)\dot{\tilde\theta}^T\varphi_f\\
&\le -k_z z^2 - k_\theta\left(\tilde\theta^T\varphi_f\right)^2 + c_\nu\left(\dot{\tilde\theta}^T\varphi_f\right)^2 \tag{90}
\end{aligned}
\]
where $k_z = k_s - c_\theta - c_z$, $k_\theta = \tau_f - \frac{(1+\tau_f k_s)^2}{4c_\theta} - c_\varepsilon$, $c_\nu = \frac{\tau_f^2}{4c_z} + \frac{\tau_f^4}{4c_\varepsilon}$, and $c_\theta$, $c_z$, and $c_\varepsilon$ are any positive numbers small enough such that $k_z \ge k_{zl} > 0$ for some constant $k_{zl}$. For any given time $t$, integrating (90) leads to

\[
\int_0^t z^2(\nu)\,d\nu \le \frac{1}{k_{zl}}\left[-\int_0^t \dot V_a\,d\nu - \int_0^t k_\theta\left(\tilde\theta^T\varphi_f\right)^2 d\nu + \int_0^t c_\nu\left(\dot{\tilde\theta}^T\varphi_f\right)^2 d\nu\right]
\le \frac{1}{k_{zl}}\left[V_a(0) - \int_0^t k_\theta\left(\tilde\theta^T\varphi_f\right)^2 d\nu + \int_0^t c_\nu\left(\dot{\tilde\theta}^T\varphi_f\right)^2 d\nu\right] \tag{91}
\]
Since all the closed-loop signals have been shown to be uniformly bounded in part A of the theorem, $k_\theta$, $c_\nu$, and $\tilde\theta^T\varphi_f$ in (91) are uniformly bounded. As shown in Theorem 3 of the chapter on System Identification and Parameter Estimation, the PAA (55) guarantees that $\tilde\theta^T\varphi_f \in L_2$ and $\dot{\tilde\theta} \in L_2^p$. Thus, the two integrals on the right-hand side of (91) have finite limits as $t \to \infty$, which shows that $z \in L_2$. Part B of the theorem can thus be proved by following exactly the same steps as in Theorem 5.
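The swapping identity (89), which drives the $L_2$ argument above, can be verified numerically in the scalar case. This is an illustrative sketch only: the regressor $\varphi(t)$, the parameter-error trajectory $\tilde\theta(t)$, and the filter constant $\tau_f$ are arbitrary choices, and (88) is integrated by forward Euler.

```python
import math

# Numerical check of (89) in the scalar case:
#   tau_f * d/dt(theta_tilde*phi_f)
#     = -theta_tilde*phi_f + theta_tilde*phi + tau_f*theta_tilde_dot*phi_f
tau_f, dt, T = 0.2, 1e-4, 10.0
n = int(T / dt)

phi = lambda t: math.sin(t)                 # regressor (illustrative choice)
tt = lambda t: math.cos(0.5 * t)            # parameter-error trajectory
tt_dot = lambda t: -0.5 * math.sin(0.5 * t)

phis = []                                   # phi_f = H_f[phi], integrated via (88)
phi_f = 0.0
for i in range(n):
    phis.append(phi_f)
    phi_f += dt * (-phi_f + phi(i * dt)) / tau_f

# Compare tau_f * d/dt(theta_tilde*phi_f) (central difference) with the RHS of (89).
max_err = 0.0
for i in range(n // 2, n - 1):              # skip the filter's initial transient
    t = i * dt
    lhs = tau_f * (tt(t + dt) * phis[i + 1] - tt(t - dt) * phis[i - 1]) / (2 * dt)
    rhs = -tt(t) * phis[i] + tt(t) * phi(t) + tau_f * tt_dot(t) * phis[i]
    max_err = max(max_err, abs(lhs - rhs))
print(max_err)
```

The residual is dominated by the Euler and finite-difference discretization errors and shrinks with the step size.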
Proof. Proof of Theorem 8
As in the proof of Theorem 7, when $\tilde\Delta = 0$, the tracking error dynamics for the IARC control law (29) are given by (87). Thus the time derivative of the non-negative function $V_s = \frac{1}{2}z^2$ is
is
V
s
= z
_
k
s
z
T
_
k
s
z
2
+|z||
T
|
k
z
z
2
+ c
_
2 (92)
where $k_z = k_s - c_z$ and $c_\nu = \frac{1}{4c_z}$, in which $c_z$ is any positive number small enough such that $k_z \ge k_{zl} > 0$ for some constant $k_{zl}$. For any given time $t$, integrating (92) leads to

\[
\int_0^t z^2(\nu)\,d\nu \le \frac{1}{k_{zl}}\left[V_s(0) + \int_0^t c_\nu\left(\tilde\theta^T\varphi\right)^2 d\nu\right] \tag{93}
\]
On the other hand, when the PE condition (56) is satisfied, from Theorem 9 of the chapter on System Identification and Parameter Estimation, the PAA (55) guarantees the asymptotic convergence of the parameter estimates to their true values and $\tilde\theta \in L_2^p$. As $\varphi$ is uniformly bounded, the integral on the right-hand side of (93) has a finite limit as $t \to \infty$, which shows that $z \in L_2$. The theorem can thus be proved by following exactly the same steps as in Theorem 5.
Proof. Proof of Theorem 9
Noting (61) and the condition ii of the robust performance condition (62), the
proof for part A of Theorem 9 is the same as in Theorem 5. The following is
to show asymptotic output tracking when $\tilde\Delta = 0$.
For simplicity, let $u_{s2} = -k_{s2}z$, where $k_{s2}$ represents the equivalent nonlinear feedback gain, and denote the total nonlinear feedback gain w.r.t. $z$ as $k_s = k + k_{s2}$. Then, when $\tilde\Delta = 0$, the tracking error dynamics (58) can be written in the simple form

\[
\dot z + k_s z = -\tilde d_c - \varphi^T\tilde\theta(t) \tag{94}
\]
Without loss of generality, assume $H_f(s) = \frac{1}{\tau_f s + 1}$, where $\tau_f$ represents the filter time constant. Then $\varphi_f = H_f[\varphi(x)]$ is represented in the time domain as given by (88), and the relationship between $\tilde\theta^T\varphi_f$ and $\tilde\theta^T\varphi$ is described by (89).
Note that all signals are guaranteed to be uniformly bounded due to the results in part A of the theorem. Thus, there exists a positive constant $\varepsilon$ small enough such that $\varepsilon < \min\left\{\frac{1}{\sqrt{\gamma_d}},\ \frac{4k_s}{4\gamma_d + k_s^2}\right\}$. With this $\varepsilon$, $V_a$ defined by

\[
V_a = \frac{1}{2}\left(z + \tau_f\tilde\theta^T\varphi_f\right)^2 + \varepsilon\left(z + \tau_f\tilde\theta^T\varphi_f\right)\tilde d_c + \frac{1}{2\gamma_d}\tilde d_c^2
= \frac{1}{2}\begin{bmatrix} z + \tau_f\tilde\theta^T\varphi_f \\ \tilde d_c \end{bmatrix}^T
\underbrace{\begin{bmatrix} 1 & \varepsilon \\ \varepsilon & \frac{1}{\gamma_d} \end{bmatrix}}
\begin{bmatrix} z + \tau_f\tilde\theta^T\varphi_f \\ \tilde d_c \end{bmatrix} \tag{95}
\]

is non-negative, as it is easy to verify that the under-braced matrix in (95) is positive definite for $\varepsilon < \frac{1}{\sqrt{\gamma_d}}$. Furthermore, from (89) and (94), it can be verified that the derivative of $V_a$ is
\[
\begin{aligned}
\dot V_a ={}& \left(z + \tau_f\tilde\theta^T\varphi_f\right)\left(-k_s z - \tilde d_c - \tilde\theta^T\varphi_f + \tau_f\dot{\tilde\theta}^T\varphi_f\right)\\
&+ \varepsilon\left[\left(z + \tau_f\tilde\theta^T\varphi_f\right)\dot{\tilde d}_c + \left(-k_s z - \tilde d_c - \tilde\theta^T\varphi_f + \tau_f\dot{\tilde\theta}^T\varphi_f\right)\tilde d_c\right] + \frac{1}{\gamma_d}\tilde d_c\dot{\tilde d}_c\\
={}& -k_s z^2 - \tau_f\left(\tilde\theta^T\varphi_f\right)^2 - (1+\tau_f k_s)\,z\,\tilde\theta^T\varphi_f + \tau_f\left(z + \tau_f\tilde\theta^T\varphi_f\right)\dot{\tilde\theta}^T\varphi_f - \varepsilon\tilde d_c^2\\
&- \left[\varepsilon k_s z + (\tau_f + \varepsilon)\tilde\theta^T\varphi_f - \varepsilon\tau_f\dot{\tilde\theta}^T\varphi_f\right]\tilde d_c + \varepsilon\left(z + \tau_f\tilde\theta^T\varphi_f\right)\dot{\hat d}_c + \tilde d_c\left(\frac{1}{\gamma_d}\dot{\hat d}_c - z\right) \tag{96}
\end{aligned}
\]
Noting the projection-type adaptation law (60) for $\hat d_c$, $|\dot{\hat d}_c| \le \gamma_d|z|$ and the last term in (96) is always less than or equal to zero. Thus, using the completion of squares technique as in (90), it is easy to verify that
\[
\begin{aligned}
\dot V_a \le{}& -k_s z^2 - \tau_f\left(\tilde\theta^T\varphi_f\right)^2 + (1+\tau_f k_s)|z|\,\bigl|\tilde\theta^T\varphi_f\bigr| + \tau_f\left(|z| + \tau_f\bigl|\tilde\theta^T\varphi_f\bigr|\right)\bigl|\dot{\tilde\theta}^T\varphi_f\bigr| - \varepsilon\tilde d_c^2\\
&+ \left[\varepsilon k_s|z| + (\tau_f + \varepsilon)\bigl|\tilde\theta^T\varphi_f\bigr| + \varepsilon\tau_f\bigl|\dot{\tilde\theta}^T\varphi_f\bigr|\right]|\tilde d_c| + \varepsilon\gamma_d\left(|z| + \tau_f\bigl|\tilde\theta^T\varphi_f\bigr|\right)|z|\\
\le{}& -k_z z^2 - k_\theta\left(\tilde\theta^T\varphi_f\right)^2 - k_c\tilde d_c^2 + c_\nu\left(\dot{\tilde\theta}^T\varphi_f\right)^2 \tag{97}
\end{aligned}
\]
where $k_z = k_s - c_\theta - c_z - \frac{\varepsilon^2 k_s^2}{4c_c} - \varepsilon\gamma_d$, $k_\theta = \tau_f - \frac{\left[1+\tau_f(k_s+\varepsilon\gamma_d)\right]^2}{4c_\theta} - c_\varepsilon - \frac{(\tau_f+\varepsilon)^2}{4\bar c_c}$, $k_c = \varepsilon - c_c - \bar c_c - c_f$, $c_\nu = \frac{\tau_f^2}{4c_z} + \frac{\tau_f^4}{4c_\varepsilon} + \frac{\varepsilon^2\tau_f^2}{4c_f}$, and $c_\theta$, $c_z$, $c_\varepsilon$, $c_c$, $\bar c_c$, and $c_f$ are any positive numbers small enough such that $k_z \ge k_{zl} > 0$ and $k_c \ge k_{cl} > 0$ for some constants $k_{zl}$ and $k_{cl}$, which always exist as long as $\varepsilon < \frac{4k_s}{4\gamma_d + k_s^2}$. For any given time $t$, integrating (97) leads to
\[
k_{zl}\int_0^t z^2\,d\nu + k_{cl}\int_0^t \tilde d_c^2\,d\nu \le V_a(0) - \int_0^t k_\theta\left(\tilde\theta^T\varphi_f\right)^2 d\nu + \int_0^t c_\nu\left(\dot{\tilde\theta}^T\varphi_f\right)^2 d\nu \tag{98}
\]
As shown in Theorem 3 of the chapter on System Identification and Parameter Estimation, the PAA (55) guarantees that $\tilde\theta^T\varphi_f \in L_2$ and $\dot{\tilde\theta} \in L_2^p$. Thus, the two integrals on the right-hand side of (98) have finite limits as $t \to \infty$, which shows that $z \in L_2$ and $\tilde d_c \in L_2$. Asymptotic output tracking can thus be proved by following exactly the same steps as in Theorem 7.
Proof. Proof of Theorem 10
With $\varepsilon$ small enough that $0 < \varepsilon < \frac{1}{\sqrt{\gamma_d}}$, $V_s$ defined by

\[
V_s = \frac{1}{2}z^2 + \varepsilon z\tilde d_c + \frac{1}{2\gamma_d}\tilde d_c^2
= \frac{1}{2}\begin{bmatrix} z \\ \tilde d_c \end{bmatrix}^T
\underbrace{\begin{bmatrix} 1 & \varepsilon \\ \varepsilon & \frac{1}{\gamma_d} \end{bmatrix}}
\begin{bmatrix} z \\ \tilde d_c \end{bmatrix} \tag{99}
\]

is non-negative, as it is easy to verify that the under-braced matrix in (99) is positive definite. As in the proof of Theorem 9, when $\tilde\Delta = 0$, the tracking error dynamics for the DIARC control law are given by (94). Thus the time derivative of the non-negative function $V_s$ is
\[
\begin{aligned}
\dot V_s &= z\left(-k_s z - \tilde d_c - \tilde\theta^T\varphi\right) + \varepsilon\left[\left(-k_s z - \tilde d_c - \tilde\theta^T\varphi\right)\tilde d_c + z\dot{\tilde d}_c\right] + \frac{1}{\gamma_d}\tilde d_c\dot{\tilde d}_c\\
&= -k_s z^2 - z\tilde\theta^T\varphi - \varepsilon k_s z\tilde d_c - \varepsilon\tilde d_c^2 - \varepsilon\tilde\theta^T\varphi\,\tilde d_c + \varepsilon z\dot{\hat d}_c + \tilde d_c\left(\frac{1}{\gamma_d}\dot{\hat d}_c - z\right) \tag{100}
\end{aligned}
\]
Noting the projection-type adaptation law (60) for $\hat d_c$, $|\dot{\hat d}_c| \le \gamma_d|z|$ and the last term in (100) is always less than or equal to zero. Thus, using the completion of squares technique as in (97), it is easy to verify that
\[
\dot V_s \le -k_s z^2 + |z|\,\bigl|\tilde\theta^T\varphi\bigr| + \varepsilon k_s|z|\,|\tilde d_c| - \varepsilon\tilde d_c^2 + \varepsilon\bigl|\tilde\theta^T\varphi\bigr|\,|\tilde d_c| + \varepsilon\gamma_d|z|^2
\le -k_z z^2 - k_c\tilde d_c^2 + c_\nu\left(\tilde\theta^T\varphi\right)^2 \tag{101}
\]
where $k_z = k_s - c_z - \frac{\varepsilon^2 k_s^2}{4c_c} - \varepsilon\gamma_d$, $k_c = \varepsilon - c_c - c_f$, $c_\nu = \frac{1}{4c_z} + \frac{\varepsilon^2}{4c_f}$, and $c_z$, $c_c$, and $c_f$ are any positive numbers small enough such that $k_z \ge k_{zl} > 0$ and $k_c \ge k_{cl} > 0$ for some constants $k_{zl}$ and $k_{cl}$, which always exist as long as $\varepsilon < \frac{4k_s}{4\gamma_d + k_s^2}$. With such an $\varepsilon$, for any given time $t$, integrating (101) leads to
\[
k_{zl}\int_0^t z^2\,d\nu + k_{cl}\int_0^t \tilde d_c^2\,d\nu \le V_s(0) + \int_0^t c_\nu\left(\tilde\theta^T\varphi\right)^2 d\nu \tag{102}
\]
On the other hand, when the PE condition (56) is satisfied, from Theorem 9 of the chapter on System Identification and Parameter Estimation, the PAA (55) guarantees the asymptotic convergence of the parameter estimates to their true values and $\tilde\theta \in L_2^p$. As $\varphi$ is uniformly bounded, the integral on the right-hand side of (102) has a finite limit as $t \to \infty$, which shows that $z \in L_2$ and $\tilde d_c \in L_2$. Asymptotic output tracking can thus be proved by following exactly the same steps as in Theorem 7.
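The non-negativity claims for the cross-term Lyapunov functions of the form used in (99) (and analogously (95)) reduce to positive definiteness of the $2\times 2$ matrix $\begin{bmatrix}1 & \varepsilon\\ \varepsilon & 1/\gamma_d\end{bmatrix}$, which holds exactly when $\varepsilon < 1/\sqrt{\gamma_d}$ (determinant test). A short numerical illustration ($\gamma_d$ and the two $\varepsilon$ values are arbitrary choices on either side of the threshold):

```python
import math

# Positive definiteness of M = [[1, eps], [eps, 1/gamma_d]]:
# det(M) = 1/gamma_d - eps**2 > 0  iff  eps < 1/sqrt(gamma_d).
def min_eig_2x2(a, b, c, d):
    """Smaller eigenvalue of the symmetric matrix [[a, b], [c, d]] (b == c)."""
    tr, det = a + d, a * d - b * c
    return tr / 2 - math.sqrt((tr / 2) ** 2 - det)

gamma_d = 4.0                                   # threshold: 1/sqrt(gamma_d) = 0.5
ok = min_eig_2x2(1.0, 0.4, 0.4, 1 / gamma_d)    # eps = 0.4 < 0.5: positive definite
bad = min_eig_2x2(1.0, 0.6, 0.6, 1 / gamma_d)   # eps = 0.6 > 0.5: indefinite
print(ok > 0, bad < 0)
```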