
Lecture 7

Design Tools for Stabilization


Nonlinear Controllability & Stabilizability
Chapter 14: Backstepping - Lyapunov Redesign
Eugenio Schuster
schuster@lehigh.edu
Mechanical Engineering and Mechanics
Lehigh University
Nonlinear Controllability
Let us focus on driftless systems:

$$\dot{x} = g_1(x)\,u_1 + \cdots + g_m(x)\,u_m$$

Definition: The system is completely controllable if given any $T > 0$ and any pair of points $x^0, x^1 \in \mathbb{R}^n$ there is an input $u = (u_1, \ldots, u_m)$ which is piecewise analytic on $[0, T]$ and which steers the system from $x(0) = x^0$ to $x(T) = x^1$.

Note: Recall that

$$[g_1, g_2] = \frac{\partial g_2}{\partial x}\,g_1 - \frac{\partial g_1}{\partial x}\,g_2$$
Nonlinear Controllability
Chow's Theorem: The system is completely controllable if and only if $g_1, \ldots, g_m$ plus all repeated Lie brackets span every direction.

Note: If we have only one control $u_1$, then $[g_1, g_1] = 0$. Thus, we cannot generate any other direction. This is different from the linear case. Here we need several controls $u_i$.
Nonlinear Controllability
Control Lie Algebra:

$$\mathcal{C} = \bigl\{ x \;\big|\; x = [x_j, [x_{j-1}, [\ldots [x_1, x_0]\ldots]]] \bigr\}, \qquad x_i \in \{g_1, \ldots, g_m\}, \quad i = 1, \ldots, j, \quad j = 1, 2, \ldots$$

$$\mathcal{L} = \mathrm{span}\{\mathcal{C}\} = \mathrm{span}\bigl\{ g_1, \ldots, g_m, [g_1, g_2], [g_1, g_3], \ldots, [g_1, [g_2, g_3]], \ldots \bigr\}$$

Chow's Theorem: The system is completely controllable if and only if $\dim \mathcal{L}(x) = n$ for all $x$.

Chow's Theorem: The system is completely controllable if and only if the involutive closure of $\{g_1, \ldots, g_m\}$ is of constant rank $n$ for all $x$.
Nonlinear Controllability
Consider now the systems:

$$\dot{x} = f(x) + g_1(x)\,u_1 + \cdots + g_m(x)\,u_m$$

Chow's Theorem: The system is completely controllable if and only if the involutive closure of $\{f, g_1, \ldots, g_m\}$ is of constant rank $n$ for all $x$.

For example, for $m = 1$: if the system is input-state linearizable, then it is completely controllable, but

completely controllable $\nRightarrow$ input-state linearizable
Nonlinear Controllability
Example: Unicycle

$$\dot{x} = l\cos(\theta)\,u_1, \qquad \dot{y} = l\sin(\theta)\,u_1, \qquad \dot{\theta} = u_2$$

It is completely controllable.

Linearization at $x = y = \theta = 0$:

$$\dot{x} = l\,u_1, \qquad \dot{y} = 0, \qquad \dot{\theta} = u_2$$

It is NOT controllable! $\Rightarrow$ complete controllability is due to the nonlinearity!
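To make the Lie bracket test concrete, here is a small symbolic check (a sketch using sympy, not part of the original slides): it computes $[g_1, g_2]$ for the unicycle and verifies that $g_1$, $g_2$, $[g_1, g_2]$ span $\mathbb{R}^3$, so $\dim \mathcal{L}(x) = 3$ everywhere and Chow's theorem applies.

    import sympy as sp

    x, y, theta, l = sp.symbols('x y theta l')
    q = sp.Matrix([x, y, theta])

    # Unicycle input vector fields: qdot = g1(q) u1 + g2(q) u2
    g1 = sp.Matrix([l * sp.cos(theta), l * sp.sin(theta), 0])
    g2 = sp.Matrix([0, 0, 1])

    def lie_bracket(f, g, q):
        """[f, g] = (dg/dq) f - (df/dq) g."""
        return g.jacobian(q) * f - f.jacobian(q) * g

    g3 = sp.simplify(lie_bracket(g1, g2, q))   # = (l sin(theta), -l cos(theta), 0)^T

    # g1, g2, [g1, g2] span R^3 iff the 3x3 matrix [g1 g2 g3] has rank 3
    M = sp.Matrix.hstack(g1, g2, g3)
    print(sp.simplify(M.det()))   # l**2, nonzero for every theta when l != 0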
Nonlinear Stabilizability
Linear systems: Controllability $\Rightarrow$ Stabilizability

Nonlinear systems: Controllability $\nRightarrow$ Stabilizability

Brockett's Theorem: If the equilibrium $x = 0$ of the $C^1$ system $\dot{x} = f(x, u)$ is locally asymptotically stabilizable by $C^1$ feedback of $x$, then the image of the mapping $f(x, u)$ contains some neighborhood of $x = 0$, i.e., $\exists\, \varepsilon > 0$ such that for every $\gamma$ with $\|\gamma\| \le \varepsilon$ there exist $x, u$ such that $f(x, u) = \gamma$.

Reminiscent of the Hautus-Popov-Belevitch Controllability Test:

$$\mathrm{rank}\,[sI - A, \; B] = n \;\; \forall s \qquad \Longleftrightarrow \qquad \mathrm{image}\,[sI - A, \; B] = \mathbb{R}^n \;\; \forall s$$

Proof Sketch:
Nonlinear Stabilizability
Example: Unicycle

$$\dot{x} = l\cos(\theta)\,u_1, \qquad \dot{y} = l\sin(\theta)\,u_1, \qquad \dot{\theta} = u_2, \qquad \text{on the set } |\theta| < \pi/2.$$

It is not stabilizable by $C^1$ feedback!
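A quick check of Brockett's necessary condition for this example (a standard argument, sketched here for completeness): the right-hand side is

$$f(x, y, \theta, u_1, u_2) = \bigl( l\cos(\theta)\,u_1, \; l\sin(\theta)\,u_1, \; u_2 \bigr).$$

Can $f = (0, \varepsilon, 0)$ with $\varepsilon \neq 0$? The third component forces $u_2 = 0$ and the first forces $l\cos(\theta)\,u_1 = 0$; since $|\theta| < \pi/2$ gives $\cos(\theta) \neq 0$, we get $u_1 = 0$, so the second component is $0 \neq \varepsilon$. Hence no neighborhood of the origin is contained in the image of $f$, Brockett's condition fails, and no $C^1$ static state feedback can asymptotically stabilize the origin.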
Nonlinear Stabilizability
Brockett's Theorem:

- only a necessary condition
- restricted to $C^1$ feedback of $x$

There are two possibilities for systems that violate Brockett's condition:

- Non-smooth feedback (semi-obvious from controllability)
- Time-varying feedback (Claude Samson - NOLCOS 1998 Survey)
Nonlinear Stabilizability
Example: Unicycle

$$\dot{x} = l\cos(\theta)\,u_1, \qquad \dot{y} = l\sin(\theta)\,u_1, \qquad \dot{\theta} = u_2$$
Control Lyapunov Function (CLF)
We are interested in an extension of the Lyapunov function concept, called a control Lyapunov function (CLF).

Let us consider the following system:

$$\dot{x} = f(x, u), \qquad x \in \mathbb{R}^n, \quad u \in \mathbb{R}, \quad f(0, 0) = 0.$$

Task: Find a feedback control law $u = \alpha(x)$ such that the equilibrium $x = 0$ of the closed-loop system

$$\dot{x} = f(x, \alpha(x))$$

is globally asymptotically stable.
Control Lyapunov Function (CLF)
Task: Find a feedback control law $u = \alpha(x)$ and a Lyapunov function candidate $V(x)$ such that

$$\dot{V} = \frac{\partial V}{\partial x}(x)\,f(x, \alpha(x)) \le -W(x), \qquad W(x) \text{ positive definite}$$

A system for which a good choice of $V(x)$ and $W(x)$ exists is said to possess a CLF.

Definition: A smooth positive definite and radially unbounded function $V : \mathbb{R}^n \to \mathbb{R}_+$ is called a control Lyapunov function (CLF) if

$$\inf_{u \in \mathbb{R}} \left\{ \frac{\partial V}{\partial x}(x)\,f(x, u) \right\} < 0 \qquad \forall x \neq 0$$

(or: $\forall x \neq 0 \;\; \exists u$ s.t. $\frac{\partial V}{\partial x}(x)\,f(x, u) < 0$)
Control Lyapunov Function (CLF)
Let us consider the following system affine in control:

$$\dot{x} = f(x) + g(x)\,u, \qquad x \in \mathbb{R}^n, \quad u \in \mathbb{R}, \quad f(0) = 0.$$

Definition: A smooth positive definite and radially unbounded function $V : \mathbb{R}^n \to \mathbb{R}_+$ is called a control Lyapunov function (CLF) if $\forall x \neq 0 \;\; \exists u$ such that

$$\frac{\partial V}{\partial x}(x)\,f(x) + \frac{\partial V}{\partial x}(x)\,g(x)\,u < 0$$

So, $V(x)$ must satisfy (equivalently)

$$\frac{\partial V}{\partial x}(x)\,g(x) = 0 \;\Longrightarrow\; \frac{\partial V}{\partial x}(x)\,f(x) < 0 \qquad \forall x \neq 0$$

The uncontrollable part is stable by itself.
Control Lyapunov Function (CLF)
Artstein (83): If a CLF exists, then $\alpha(x)$ exists (but the proof is not constructive).

Naive formula (not continuous at $\frac{\partial V}{\partial x}(x)\,g(x) = 0$):

$$u = \alpha(x) = -\,\frac{\frac{\partial V}{\partial x}(x)\,f(x) + W(x)}{\frac{\partial V}{\partial x}(x)\,g(x)}$$

Sontag's formula (89):

$$u = \alpha_s(x) = \begin{cases} -\,\dfrac{\frac{\partial V}{\partial x}(x)\,f(x) + \sqrt{\left(\frac{\partial V}{\partial x}(x)\,f(x)\right)^2 + \left(\frac{\partial V}{\partial x}(x)\,g(x)\right)^4}}{\frac{\partial V}{\partial x}(x)\,g(x)} & \frac{\partial V}{\partial x}(x)\,g(x) \neq 0 \\[2ex] 0 & \frac{\partial V}{\partial x}(x)\,g(x) = 0 \end{cases}$$

This control gives

$$\dot{V} = -\sqrt{\left(\frac{\partial V}{\partial x}(x)\,f(x)\right)^2 + \left(\frac{\partial V}{\partial x}(x)\,g(x)\right)^4} < 0$$
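A direct numerical transcription of Sontag's formula may help (a sketch; the function name and the scalar example are illustrative choices, not from the slides):

    import numpy as np

    def sontag_control(LfV, LgV):
        """Sontag's universal formula for a single-input system affine in control.
        LfV = dV/dx * f(x), LgV = dV/dx * g(x) (scalars here)."""
        if LgV == 0.0:
            return 0.0
        return -(LfV + np.sqrt(LfV**2 + LgV**4)) / LgV

    # Example: xdot = x - x^3 + u with CLF V = x^2/2, so LfV = x^2 - x^4, LgV = x
    for x in (-1.0, 0.5, 2.0):
        u = sontag_control(x**2 - x**4, x)
        Vdot = (x**2 - x**4) + x * u       # equals -sqrt(LfV^2 + LgV^4) < 0
        print(x, u, Vdot)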
Control Lyapunov Function (CLF)
Question: Is $\alpha_s(x)$ continuous on $\mathbb{R}^n$?

Lemma: $\alpha_s(x)$ is smooth on $\mathbb{R}^n \setminus \{0\}$.

Lemma: $\alpha_s(x)$ is continuous at $x = 0$ if and only if the CLF satisfies the small control property: $\forall \varepsilon > 0$, $\exists\, \delta(\varepsilon) > 0$ such that if $0 < \|x\| < \delta$, $\exists\, |u| < \varepsilon$ such that

$$\frac{\partial V}{\partial x}(x)\bigl[ f(x) + g(x)\,u \bigr] < 0$$

In other words, if there is a continuous controller stabilizing $x = 0$ w.r.t. the given $V$, then $\alpha_s(x)$ is also continuous at zero.

Sontag's formula is continuous at the origin and smooth away from the origin.
Control Lyapunov Function (CLF)
Theorem: A system is stabilizable if and only if there exists a CLF.

Proof:

- There is a CLF $\Rightarrow$ the system is stabilizable (proved, via Sontag's formula)
- The system is stabilizable $\Rightarrow$ there is a CLF (trivial by a Converse Lyapunov theorem)
Control Lyapunov Function (CLF)
Examples:
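One minimal scalar illustration (an example chosen here for concreteness, not necessarily the one worked in lecture):

$$\dot{x} = x - x^3 + u, \qquad V(x) = \tfrac{1}{2}x^2, \qquad \frac{\partial V}{\partial x}f(x) = x^2 - x^4, \qquad \frac{\partial V}{\partial x}g(x) = x.$$

Here $\frac{\partial V}{\partial x}g(x) = 0$ only at $x = 0$, and for every $x \neq 0$ the choice $u = -2x$ gives $\dot{V} = -x^2 - x^4 < 0$, so $V$ is a CLF. Since this $u$ satisfies $|u| = 2|x| \to 0$ as $x \to 0$, the small control property also holds, and Sontag's formula gives a feedback that is continuous at the origin.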
Backstepping
Let us consider the following system affine in control:

$$\dot{x} = f(x) + g(x)\,u, \qquad x \in \mathbb{R}^n, \quad u \in \mathbb{R}, \quad f(0) = 0.$$

Assumption: There exist $u = \alpha(x)$ and $V(x)$ such that

$$\frac{\partial V}{\partial x}\bigl[ f(x) + g(x)\,\alpha(x) \bigr] \le -W(x), \qquad W(x) \text{ positive definite}$$

Lemma: Integrator Backstepping

$$\dot{x} = f(x) + g(x)\,\xi, \qquad \dot{\xi} = u$$

There is a whole integrator between $u$ and $\xi$. Under the previous assumption, the system has a CLF

$$V_a(x, \xi) = V(x) + \frac{1}{2}\bigl( \xi - \alpha(x) \bigr)^2 \qquad (a\text{: augmented})$$
Backstepping
and the corresponding feedback that gives global asymptotic stability is

$$u = -c\bigl( \xi - \alpha(x) \bigr) + \frac{\partial \alpha}{\partial x}(x)\bigl[ f(x) + g(x)\,\xi \bigr] - \frac{\partial V}{\partial x}\,g(x), \qquad c > 0$$

(It would be enough to introduce this CLF in Sontag's formula.)

Backstepping: We have a virtual control $\xi$ and we have to go back through an integrator.
Backstepping
Proof:
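A minimal sketch of the standard argument (with $z = \xi - \alpha(x)$ and the augmented function $V_a$ from the previous slides):

$$\dot{V}_a = \frac{\partial V}{\partial x}\bigl[f(x) + g(x)\xi\bigr] + z\Bigl(u - \frac{\partial \alpha}{\partial x}\bigl[f(x) + g(x)\xi\bigr]\Bigr) = \frac{\partial V}{\partial x}\bigl[f(x) + g(x)\alpha(x)\bigr] + z\Bigl(\frac{\partial V}{\partial x}g(x) + u - \frac{\partial \alpha}{\partial x}\bigl[f(x) + g(x)\xi\bigr]\Bigr).$$

Substituting the feedback above makes the last bracket collapse to $-c\,z$, so $\dot{V}_a \le -W(x) - c\,z^2$, which is negative definite in $(x, z)$ and gives global asymptotic stability of the equilibrium $x = 0$, $z = 0$.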
Backstepping
Example: Avoid singularities in feedback linearization

$$\dot{x} = x\,\xi, \qquad \dot{\xi} = u$$
Backstepping
In the case of more than one integrator,

$$\dot{x} = f(x) + g(x)\,\xi_1, \qquad \dot{\xi}_1 = \xi_2, \quad \ldots, \quad \dot{\xi}_{n-1} = \xi_n, \qquad \dot{\xi}_n = u,$$

we only have to apply the backstepping lemma $n$ times.
Backstepping
Example: Khalil Examples 14.8 & 14.9
Backstepping
In the more general case

$$\dot{x} = f(x) + g(x)\,\xi, \qquad \dot{\xi} = f_a(x, \xi) + g_a(x, \xi)\,u,$$

if $g_a(x, \xi) \neq 0$ over the domain of interest, the input transformation

$$u = \frac{1}{g_a(x, \xi)}\bigl[ v - f_a(x, \xi) \bigr]$$

will reduce the system to

$$\dot{x} = f(x) + g(x)\,\xi, \qquad \dot{\xi} = v,$$

and the backstepping lemma can be applied.
Backstepping
Strict Feedback Systems:

$$\dot{x}_i = x_{i+1} + \varphi_i(\bar{x}_i), \quad i = 1, \ldots, n-1, \qquad \dot{x}_n = u + \varphi_n(x),$$

where $\bar{x}_i = [x_1, \ldots, x_i]^T$ and the $\varphi_i(\bar{x}_i)$ are smooth with $\varphi_i(0) = 0$.

We have a triangular structure:

$$\begin{aligned} \dot{x}_1 &= x_2 + \varphi_1(x_1) \\ \dot{x}_2 &= x_3 + \varphi_2(x_1, x_2) \\ &\;\;\vdots \\ \dot{x}_n &= u + \varphi_n(x_1, x_2, \ldots, x_n) \end{aligned}$$

Linear part: Brunovsky canonical form $\Rightarrow$ feedback linearizable.
Backstepping
The control law

$$z_i = x_i - \alpha_{i-1}(\bar{x}_{i-1}), \qquad \alpha_0 = 0, \quad z_0 = 0,$$

$$\alpha_i(\bar{x}_i) = -z_{i-1} - c_i\,z_i - \varphi_i + \sum_{j=1}^{i-1} \frac{\partial \alpha_{i-1}}{\partial x_j}\bigl( x_{j+1} + \varphi_j \bigr), \qquad c_i > 0,$$

$$u = \alpha_n$$

guarantees global asymptotic stability of $x = 0$.
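A small symbolic sketch of this recursion for $n = 2$ (illustrative only; the nonlinearities $\varphi_i$ and all names are chosen here):

    import sympy as sp

    x1, x2 = sp.symbols('x1 x2')
    x = {1: x1, 2: x2}
    c = {1: sp.Symbol('c1', positive=True), 2: sp.Symbol('c2', positive=True)}

    # Illustrative strict-feedback nonlinearities phi_i with phi_i(0) = 0
    phi = {1: x1**2, 2: x1 * x2}

    # Recursion from the slide: z_i = x_i - alpha_{i-1}, alpha_0 = 0, z_0 = 0
    alpha = {0: sp.Integer(0)}
    z = {0: sp.Integer(0)}
    n = 2
    for i in range(1, n + 1):
        z[i] = sp.expand(x[i] - alpha[i - 1])
        correction = sum(sp.diff(alpha[i - 1], x[j]) * (x[j + 1] + phi[j])
                         for j in range(1, i))
        alpha[i] = sp.expand(-z[i - 1] - c[i] * z[i] - phi[i] + correction)

    u = alpha[n]                     # final control law u = alpha_n
    print(sp.simplify(u))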
Backstepping
Proof:
Backstepping
The technique can be extended to more general Strict Feedback Systems:

$$\dot{x}_i = \gamma_i(\bar{x}_i)\,x_{i+1} + \varphi_i(\bar{x}_i), \quad i = 1, \ldots, n-1, \qquad \dot{x}_n = \gamma_n(x)\,u + \varphi_n(x),$$

where $\bar{x}_i = [x_1, \ldots, x_i]^T$ ($\bar{x}_n = x$), the $\varphi_i(\bar{x}_i)$ are smooth with $\varphi_i(0) = 0$, and $\gamma_i(\bar{x}_i) \neq 0$ for $i = 1, \ldots, n$ over the domain of interest.
Backstepping
Robust Control: Consider the system

$$\begin{aligned} \dot{\eta} &= f(\eta) + g(\eta)\,\xi + \delta_\eta(\eta, \xi) \\ \dot{\xi} &= f_a(\eta, \xi) + g_a(\eta, \xi)\,u + \delta_\xi(\eta, \xi) \end{aligned}$$

where the uncertainty satisfies the inequalities

$$\|\delta_\eta(\eta, \xi)\|_2 \le a_1 \|\eta\|_2, \qquad |\delta_\xi(\eta, \xi)| \le a_2 \|\eta\|_2 + a_3 |\xi|.$$

Let $\phi(\eta)$ be a stabilizing state feedback control law for the $\eta$-system that satisfies

$$|\phi(\eta)| \le a_4 \|\eta\|_2, \qquad \left\| \frac{\partial \phi}{\partial \eta} \right\|_2 \le a_5$$
Backstepping
and $V(\eta)$ be a Lyapunov function that satisfies

$$\frac{\partial V}{\partial \eta}\bigl[ f(\eta) + g(\eta)\,\phi(\eta) + \delta_\eta(\eta, \xi) \bigr] \le -b\,\|\eta\|_2^2.$$

Then, the state feedback control law

$$u = \frac{1}{g_a}\left\{ \frac{\partial \phi}{\partial \eta}\bigl[ f(\eta) + g(\eta)\,\xi \bigr] - \frac{\partial V}{\partial \eta}\,g(\eta) - f_a - k\,(\xi - \phi) \right\}$$

with $k$ sufficiently large, stabilizes the origin of our system. Moreover, if all assumptions hold globally and $V(\eta)$ is radially unbounded, the origin will be globally asymptotically stable.
Backstepping
Proof:
Backstepping
Assumption: There exist $\xi = \phi(\eta)$ with $\phi(0) = 0$, and $V(\eta)$ such that

$$\frac{\partial V}{\partial \eta}\bigl[ f(\eta) + G(\eta)\,\phi(\eta) \bigr] \le -W(\eta), \qquad W(\eta) \text{ positive definite}$$

Lemma: Block Backstepping

$$\dot{\eta} = f(\eta) + G(\eta)\,\xi, \qquad \dot{\xi} = f_a(\eta, \xi) + G_a(\eta, \xi)\,u,$$

where $\eta \in \mathbb{R}^n$, $\xi \in \mathbb{R}^m$, and $u \in \mathbb{R}^m$, in which $m$ can be greater than one.
Backstepping
Under the previous assumption, the system has a CLF

$$V_c(\eta, \xi) = V(\eta) + \frac{1}{2}\bigl[ \xi - \phi(\eta) \bigr]^T \bigl[ \xi - \phi(\eta) \bigr],$$

and the corresponding feedback that gives asymptotic stability for the equilibrium at the origin is

$$u = G_a^{-1}\left\{ \frac{\partial \phi}{\partial \eta}\bigl[ f(\eta) + G(\eta)\,\xi \bigr] - \left( \frac{\partial V}{\partial \eta}\,G(\eta) \right)^T - f_a - k\bigl( \xi - \phi(\eta) \bigr) \right\}$$

with $k > 0$.
Backstepping
Proof:
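A minimal sketch, mirroring the integrator case with $z = \xi - \phi(\eta)$:

$$\dot{V}_c = \frac{\partial V}{\partial \eta}\bigl[f(\eta) + G(\eta)\phi(\eta)\bigr] + \frac{\partial V}{\partial \eta}G(\eta)\,z + z^T\Bigl[ f_a + G_a u - \frac{\partial \phi}{\partial \eta}\bigl( f(\eta) + G(\eta)\xi \bigr) \Bigr].$$

With the feedback of the previous slide, $G_a u$ cancels $f_a$ and the $\frac{\partial \phi}{\partial \eta}$ term and contributes $-\bigl(\frac{\partial V}{\partial \eta}G(\eta)\bigr)^T - k\,z$, so the cross terms cancel and $\dot{V}_c \le -W(\eta) - k\,z^T z$.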
Backstepping
Example: Avoid cancellation

$$\dot{x} = x - x^3 + \xi, \qquad \dot{\xi} = u$$
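One possible way to work this example (a sketch, assuming the system indeed reads as above): keep the helpful $-x^3$ term instead of canceling it. With $V(x) = \tfrac{1}{2}x^2$ and the virtual control $\alpha(x) = -2x$,

$$\frac{\partial V}{\partial x}\bigl[ f(x) + g(x)\,\alpha(x) \bigr] = x\bigl( x - x^3 - 2x \bigr) = -x^2 - x^4,$$

so the assumption of the integrator backstepping lemma holds with $W(x) = x^2 + x^4$ and without canceling $-x^3$. The lemma then gives, with $z = \xi + 2x$,

$$u = -c\,z - 2\bigl( x - x^3 + \xi \bigr) - x, \qquad c > 0, \qquad \dot{V}_a \le -x^2 - x^4 - c\,z^2.$$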
Lyapunov Redesign
Stabilization: Let us consider the following system:

$$\dot{x} = f(t, x) + G(t, x)\bigl[ u + \delta(t, x, u) \bigr], \qquad x \in \mathbb{R}^n, \quad u \in \mathbb{R}^p.$$

The uncertain term $\delta$ is an unknown function that lumps together various uncertain terms due to model simplification, parameter uncertainty, etc. The uncertain term satisfies the matching condition, i.e., the uncertain term enters the state equation at the same point as the control input $u$.

Suppose we designed a control law $u = \psi(t, x)$ such that the origin of the nominal closed-loop system

$$\dot{x} = f(t, x) + G(t, x)\,\psi(t, x)$$

is uniformly asymptotically stable.
Lyapunov Redesign
Suppose further that we know a Lyapunov function $V(t, x)$ that satisfies

$$\alpha_1(\|x\|) \le V(t, x) \le \alpha_2(\|x\|)$$

$$\frac{\partial V}{\partial t} + \frac{\partial V}{\partial x}\bigl[ f(t, x) + G(t, x)\,\psi(t, x) \bigr] \le -\alpha_3(\|x\|)$$

for all $t \ge 0$ and for all $x \in D$, where $\alpha_1$, $\alpha_2$ and $\alpha_3$ are class $\mathcal{K}$ functions.

We assume that the uncertainty satisfies the inequality

$$\|\delta(t, x, \psi(t, x) + v)\| \le \rho(t, x) + k_0 \|v\|, \qquad 0 \le k_0 < 1.$$

GOAL: Design $v$ such that the overall control $u = \psi(t, x) + v$ stabilizes the actual system in the presence of the uncertainty.
Lyapunov Redesign
Solutions: Non-Continuous

$$\|\delta(t, x, \psi(t, x) + v)\|_2 \le \rho(t, x) + k_0\|v\|_2 \quad\Longrightarrow\quad v = -\eta(t, x)\,\frac{w}{\|w\|_2}$$

$$\|\delta(t, x, \psi(t, x) + v)\|_\infty \le \rho(t, x) + k_0\|v\|_\infty \quad\Longrightarrow\quad v = -\eta(t, x)\,\mathrm{sgn}(w)$$

where

$$w^T = \frac{\partial V}{\partial x}\,G.$$

These control laws are discontinuous functions of the state $x$:

- Avoid division by zero
- Existence and uniqueness of solutions (not locally Lipschitz)
- Chattering
Lyapunov Redesign
Solutions: Continuous (just for one of the controllers)

$$v = \begin{cases} -\eta(t, x)\,\dfrac{w}{\|w\|_2} & \text{if } \eta(t, x)\,\|w\|_2 \ge \varepsilon \\[2ex] -\eta^2(t, x)\,\dfrac{w}{\varepsilon} & \text{if } \eta(t, x)\,\|w\|_2 < \varepsilon \end{cases}$$

- In general, the continuous Lyapunov redesign does NOT stabilize the origin as its discontinuous counterpart does
- It guarantees boundedness of the solution
- It stabilizes the origin if the uncertainty vanishes at the origin and if, in addition, $\alpha_3(\|x\|_2)$, $\eta(t, x) \ge \eta_0 > 0$, and $\rho(t, x)$ satisfy a suitable growth condition (see Khalil, Section 14.2)
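A direct transcription of the continuous control law (a sketch; the function name is chosen here, and $w$, $\eta$, $\varepsilon$ must come from the specific problem):

    import numpy as np

    def lyapunov_redesign_v(w, eta, eps):
        """Continuous Lyapunov redesign term v, added to the nominal control psi(t, x).
        w   : (dV/dx * G)^T as a vector
        eta : gain eta(t, x), typically chosen with eta >= rho / (1 - k0)
        eps : smoothing level; smaller eps -> smaller ultimate bound, larger gains near w = 0
        """
        w = np.asarray(w, dtype=float)
        wn = np.linalg.norm(w, 2)
        if eta * wn >= eps:
            return -eta * w / wn        # same as the discontinuous (unit-vector) law
        return -(eta**2) * w / eps      # linear high-gain branch near w = 0

    # Example call with illustrative numbers
    print(lyapunov_redesign_v([0.3, -0.1], eta=2.0, eps=0.05))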
Lyapunov Redesign
Nonlinear Damping: Let us consider again the system

$$\dot{x} = f(t, x) + G(t, x)\bigl[ u + \delta(t, x, u) \bigr], \qquad x \in \mathbb{R}^n, \quad u \in \mathbb{R}^p,$$

but with $\delta(t, x, u) = \Gamma(t, x)\,\delta_0(t, x, u)$, where $\delta_0$ is a uniformly bounded uncertain term.

Suppose we designed a control law $u = \psi(t, x)$ such that the origin of the nominal closed-loop system

$$\dot{x} = f(t, x) + G(t, x)\,\psi(t, x)$$

is uniformly asymptotically stable.
Lyapunov Redesign
Suppose further that we know a Lyapunov function $V(t, x)$ that satisfies

$$\alpha_1(\|x\|) \le V(t, x) \le \alpha_2(\|x\|)$$

$$\frac{\partial V}{\partial t} + \frac{\partial V}{\partial x}\bigl[ f(t, x) + G(t, x)\,\psi(t, x) \bigr] \le -\alpha_3(\|x\|)$$

for all $t \ge 0$ and for all $x \in D$, where $\alpha_1$, $\alpha_2$ and $\alpha_3$ are class $\mathcal{K}_\infty$ functions.

- If an upper bound of $\delta_0$ is known $\Rightarrow$ stabilization as before.
- If an upper bound of $\delta_0$ is NOT known $\Rightarrow$ nonlinear damping guarantees boundedness with

$$v = -k\,w\,\|\Gamma(t, x)\|_2^2, \qquad k > 0.$$
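The corresponding damping term in code form (a sketch with illustrative names; the norm is taken here as the induced 2-norm):

    import numpy as np

    def nonlinear_damping_v(w, Gamma, k=1.0):
        """v = -k * w * ||Gamma(t, x)||_2^2; only uniform boundedness of delta_0
        is needed, not a known bound."""
        Gamma = np.atleast_2d(np.asarray(Gamma, dtype=float))
        return -k * np.asarray(w, dtype=float) * np.linalg.norm(Gamma, 2)**2

    print(nonlinear_damping_v([0.3, -0.1], Gamma=[[1.0, 0.5]], k=2.0))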
