
Solution Manual
for
Adaptive Control
Second Edition

Karl Johan Åström

Björn Wittenmark

Preface

This Solution Manual contains solutions to selected problems in the second edition of Adaptive Control, published by Addison-Wesley 1995, ISBN 0-201-55866-1.

PROBLEM SOLUTIONS

SOLUTIONS TO CHAPTER 1
1.5

Linearization of the valve shows that

Δv = 4 v0^3 Δu

where v0 is the operating point of the valve input. The loop transfer function is then G0(s) GPI(s) 4 v0^3, where GPI is the transfer function of a PI controller, i.e.

GPI(s) = K (1 + 1/(s Ti))

The characteristic equation of the closed-loop system is

s Ti (s + 1)^3 + 4 K v0^3 (s Ti + 1) = 0

With K = 0.15 and Ti = 1 we get

(s + 1) [ s (s + 1)^2 + 0.6 v0^3 ] = 0
(s + 1) (s^3 + 2 s^2 + s + 0.6 v0^3) = 0

The root locus of this equation with respect to v0 is sketched in Fig. 1. According to the Routh-Hurwitz criterion the critical case is

0.6 v0^3 = 2,   i.e.   v0 = (10/3)^(1/3) = 1.49
Since the plant G0 has unit static gain and the controller has integral action, the steady-state output equals the set point, so v0^4 = yr. The closed-loop system is thus stable for yr = uc = 0.3 and 1.1 but unstable for yr = uc = 5.1. Compare with Fig. 1.9.
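As a quick cross-check of this limit, the cubic above can be evaluated numerically. The short sketch below is not part of the original solution; it assumes, as in the text, that v0 denotes the valve input operating point, so that v0 = yr^(1/4) at steady state.

# Numerical check of the stability limit: the closed loop is stable when all
# roots of s^3 + 2 s^2 + s + 0.6 v0^3 have negative real parts.
import numpy as np

v0_crit = (2 / 0.6) ** (1 / 3)                 # Routh-Hurwitz: 0.6 v0^3 = 2
print(f"critical operating point v0 = {v0_crit:.3f}, i.e. yr = {v0_crit**4:.2f}")

for yr in (0.3, 1.1, 5.1):
    v0 = yr ** 0.25                            # steady-state valve input
    roots = np.roots([1, 2, 1, 0.6 * v0**3])
    print(f"yr = {yr:3.1f}: max Re(root) = {roots.real.max():+.3f}, "
          f"stable = {bool(np.all(roots.real < 0))}")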
Figure 1. Root locus in Problem 1.5.

1.6

Tune the controller using the Ziegler-Nichols closed-loop method. The frequency ωu where the process has 180° phase lag is first determined. The controller parameters are then given by Table 8.2 on page 382, where Ku = 1/|G0(iωu)|. With

G0(s) = e^(-s/q) / (1 + s/q)

we have

arg G0(iω) = -arctan(ω/q) - ω/q = -π   at ω = ωu

This gives the following controller parameters:

q      ωu     |G0(iωu)|   K     Ti
0.5    1.0    0.45        1     5.24
1.0    2.0    0.45        1     2.62
2.0    4.1    0.45        1     1.3

A simulation of the system obtained when the controller is tuned for the smallest flow q = 0.5 is shown in Fig. 2. The Ziegler-Nichols method is not the best tuning method in this case.

Figure 2. Simulation in Problem 1.6. Process output and control signal are shown for q = 0.5 (full), q = 1 (dashed), and q = 2 (dotted). The controller is designed for q = 0.5.

In Fig. 3 we show results for the controller designed for q = 1, and in Fig. 4 for the controller designed for q = 2.
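The table above can be reproduced numerically. The sketch below is my own; it assumes the classical Ziegler-Nichols PI rules K = 0.45 Ku and Ti = Tu/1.2 for the entries of Table 8.2, and gives values close to those listed.

# Compute the ultimate frequency and Ziegler-Nichols PI parameters for
# G0(s) = e^(-s/q)/(1 + s/q).
import numpy as np
from scipy.optimize import brentq

def tune_pi(q):
    # phase condition: arctan(w/q) + w/q = pi, so x = w/q solves arctan(x) + x = pi
    x = brentq(lambda x: np.arctan(x) + x - np.pi, 1.0, 3.0)
    wu = x * q
    Ku = np.sqrt(1.0 + x**2)        # 1/|G0(i wu)|
    Tu = 2.0 * np.pi / wu
    return wu, 0.45 * Ku, Tu / 1.2

for q in (0.5, 1.0, 2.0):
    wu, K, Ti = tune_pi(q)
    print(f"q = {q:3.1f}: wu = {wu:.2f}, K = {K:.2f}, Ti = {Ti:.2f}")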
1.7

Introducing the feedback

u2 = -k2 y2

the closed-loop system takes the state-space form

dx/dt = A(k2) x + B u1,    y1 = [ 1  1  0  0 ] x

where the matrices A(k2) and B depend on the feedback gain k2. The transfer function from u1 to y1 becomes

G(s) = [ 1  1  0  0 ] (sI - A(k2))^(-1) B = (s^2 + (4 - k2) s + 3 + k2) / ((s + 1)(s + 3)(s + 1 + k2))

Figure 3. Simulation in Problem 1.6. Process output and control signal are shown for q = 0.5 (full), q = 1 (dashed), and q = 2 (dotted). The controller is designed for q = 1.

The static gain is

G(0) = (3 + k2) / (3 (1 + k2))

Figure 4. Simulation in Problem 1.6. Process output and control signal are shown for q = 0.5 (full), q = 1 (dashed), and q = 2 (dotted). The controller is designed for q = 2.


SOLUTIONS TO CHAPTER 2
2.1

The function V can be written as

V(x1, ..., xn) = Σ_{i,j=1}^{n} xi xj (aij + aji)/2 + Σ_{i=1}^{n} bi xi + c

Taking the derivative with respect to xi we get

∂V/∂xi = Σ_{j=1}^{n} (aij + aji) xj + bi

In vector notation this can be written as

grad_x V(x) = (A + A^T) x + b
2.2

The model is

yt = φt^T θ + et = [ ut  ut-1 ] [ b0 ; b1 ] + et

The least squares estimate is given as the solution of the normal equations

[ Σ ut^2      Σ ut ut-1 ] θ̂ = Φ^T Y = [ Σ ut yt   ]
[ Σ ut ut-1   Σ ut-1^2  ]              [ Σ ut-1 yt ]

(a) The input is a unit step,

ut = 1 for t > 0,   ut = 0 otherwise.

Evaluating the sums over t = 1, ..., N we get

Φ^T Φ = [ N     N-1 ]        Φ^T Y = [ Σ_{t=1}^{N} yt ]
        [ N-1   N-1 ]                [ Σ_{t=2}^{N} yt ]

Hence

θ̂ = (Φ^T Φ)^{-1} Φ^T Y = [ y1 ;  (1/(N-1)) Σ_{t=2}^{N} yt - y1 ]

and the estimation error is

θ̂ - θ = [ e1 ;  (1/(N-1)) Σ_{t=2}^{N} et - e1 ]

Hence

cov(θ̂) = σ^2 (Φ^T Φ)^{-1} = σ^2 [ 1    -1        ]
                                 [ -1    N/(N-1) ]

Notice that the variances of the individual estimates do not go to zero as N → ∞. Consider, however, the estimate of b0 + b1:

E(b̂0 + b̂1 - b0 - b1)^2 = [ 1  1 ] cov(θ̂) [ 1 ; 1 ] = σ^2/(N - 1)

With a step input it is thus possible to determine the combination b0 + b1 consistently. The individual values of b0 and b1 can, however, not be determined consistently.

(b) The input u is white noise with E u^2 = 1, and u is independent of e. Then

E ut^2 = 1,   E ut ut-1 = 0

and

cov(θ̂) = σ^2 E(Φ^T Φ)^{-1} ≈ (σ^2/N) I

In this case it is thus possible to determine both parameters consistently.
2.3

The data generating process is

y(t) = b0 u(t) + b1 u(t-1) + e(t) = φ(t) θ0 + ε(t)

where

φ(t) = u(t),   θ0 = b0,   ε(t) = b1 u(t-1) + e(t)

The model is

y(t) = b̂ u(t) + ε(t) = φ(t) θ̂ + ε(t),   where   ε(t) = y(t) - ŷ(t)

The least squares estimate is given by

θ̂ = θ0 + (Φ^T Φ)^{-1} Φ^T Ed,    Ed = [ ε(1)  ...  ε(N) ]^T

where

(1/N) Φ^T Φ = (1/N) Σ_{t=1}^{N} u^2(t) → E u^2

(1/N) Φ^T Ed = (1/N) Σ_{t=1}^{N} u(t) ε(t) = (1/N) Σ_{t=1}^{N} u(t) (b1 u(t-1) + e(t))
             → b1 E(u(t) u(t-1)) + E(u(t) e(t))

as N → ∞.

(a) u(t) = 1 for t ≥ 1 and u(t) = 0 for t < 1 (unit step). Then

E(u^2) = 1,   E(u(t) u(t-1)) = 1,   E(u(t) e(t)) = 0

Hence

b̂ → b0 + b1

i.e. b̂ converges to the stationary gain.

(b) u(t) ∈ N(0, σ), so E u^2 = σ^2. Then

E(u(t) u(t-1)) = 0,   E(u(t) e(t)) = 0

Hence b̂ → b0.

2.6

The model is

yt = φt^T θ + εt,   where   φt^T = [ -yt-1   ut-1 ],   θ = [ a  b ]^T,   εt = et + c et-1

The least squares estimate is given by the solution to the normal equation (2.5). The estimation error is

θ̂ - θ = [ Σ yt-1^2        -Σ yt-1 ut-1 ]^{-1} [ -Σ yt-1 et - c Σ yt-1 et-1 ]
        [ -Σ yt-1 ut-1     Σ ut-1^2    ]      [  Σ ut-1 et + c Σ ut-1 et-1 ]

Notice that Φ^T and ε are not independent: ut and et are independent, but yt depends on et, et-1, et-2, ... and on ut-1, ut-2, .... Taking mean values we get

E(θ̂ - θ) ≈ E(Φ^T Φ)^{-1} E(Φ^T ε)

To evaluate this expression we calculate

E [ Σ yt-1^2        -Σ yt-1 ut-1 ] = N [ E yt^2    0      ]
  [ -Σ yt-1 ut-1     Σ ut-1^2    ]     [ 0         E ut^2 ]

and

E [ -Σ yt-1 et - c Σ yt-1 et-1 ] = [ -c N E yt-1 et-1 ]
  [  Σ ut-1 et + c Σ ut-1 et-1 ]   [  0               ]

where

E yt-1 et-1 = E (-a yt-2 + b ut-2 + et-1 + c et-2) et-1 = σ^2

Since

yt = (b/(q + a)) ut + ((q + c)/(q + a)) et

we have, with E ut^2 = 1 and E et^2 = σ^2,

E yt^2 = b^2/(1 - a^2) + (1 - 2ac + c^2) σ^2/(1 - a^2)

and we get

E(â - a) = -c (1 - a^2) σ^2 / ( b^2 + (1 - 2ac + c^2) σ^2 ),        E(b̂ - b) = 0
2.8

The model is

y(t) = a + b t + e(t) = φ^T θ + e(t),    φ = [ 1  t ]^T,   θ = [ a  b ]^T

According to Theorem 2.1 the solution is given by equation (2.6), i.e.

θ̂ = (Φ^T Φ)^{-1} Φ^T Y

where

Φ^T = [ 1  1  ...  1 ]        Y = [ y(1)  y(2)  ...  y(N) ]^T
      [ 1  2  ...  N ]

Hence

Φ^T Φ = [ N                 Σ_{t=1}^{N} t   ]        Φ^T Y = [ s0 ]
        [ Σ_{t=1}^{N} t     Σ_{t=1}^{N} t^2 ]                [ s1 ]

and

â = 2 ((2N + 1) s0 - 3 s1) / (N (N - 1))
b̂ = 6 (-(N + 1) s0 + 2 s1) / (N (N + 1)(N - 1))

where we have made use of

Σ_{t=1}^{N} t = N(N + 1)/2,    Σ_{t=1}^{N} t^2 = N(N + 1)(2N + 1)/6

and introduced

s0 = Σ_{t=1}^{N} y(t),    s1 = Σ_{t=1}^{N} t y(t)

The covariance of the estimate is given by

cov(θ̂) = σ^2 (Φ^T Φ)^{-1} = (12 σ^2 / (N(N + 1)(N - 1))) [ (N + 1)(2N + 1)/6   -(N + 1)/2 ]
                                                          [ -(N + 1)/2           N         ]

Notice that the variance of b̂ decreases as N^{-3} for large N, while the variance of â only decreases as N^{-1}. The reason for this is that the regressor associated with a is 1 while the regressor associated with b is t. Notice that there are better numerical methods to solve for θ̂!
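The scaling of the two variances can be illustrated numerically; the following short sketch (not from the book) simply evaluates σ^2 (Φ^T Φ)^{-1} for increasing N.

# Parameter covariance for the line fit y(t) = a + b*t, t = 1..N.
import numpy as np

sigma = 1.0
for N in (10, 100, 1000):
    t = np.arange(1, N + 1)
    Phi = np.column_stack([np.ones(N), t])
    cov = sigma**2 * np.linalg.inv(Phi.T @ Phi)
    print(f"N = {N:5d}: var(a_hat) = {cov[0, 0]:.2e}, var(b_hat) = {cov[1, 1]:.2e}")
# var(a_hat) decreases roughly as 1/N while var(b_hat) decreases as 1/N^3.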
2.17

(a) The following derivation gives a formula for the asymptotic LS estimate:

b̂ = (Φ^T Φ)^{-1} Φ^T Y = [ Σ_{k=1}^{N} u(k-1)^2 ]^{-1} Σ_{k=1}^{N} u(k-1) ỹ(k)
  = [ (1/N) Σ_{k=1}^{N} u(k-1)^2 ]^{-1} (1/N) Σ_{k=1}^{N} u(k-1) ỹ(k)
  → [ E(u(k-1)^2) ]^{-1} E(u(k-1) ỹ(k)),   as N → ∞

The equations for the closed-loop system are

ỹ(k) = y(k) + a y(k-1) = b u(k-1)
u(k) = K (uc(k) - y(k))

The signals u(k) and y(k) are stationary; this follows since the controller gain is chosen so that the closed-loop system is stable. It then follows that E(u(k-1)^2) = E(u(k)^2) and E(u(k-1) ỹ(k)) = E(b u(k-1)^2) = b E(u(k)^2) exist, and the asymptotic LS estimate becomes b̂ = b, i.e. we have an unbiased estimate.

Figure 5. The system redrawn (block diagram with the estimator, the disturbance d, uc = 0, and the block 1/(q + a)).

(b) Similarly to (a) we get

b̂ = [ (1/N) Σ_{k=1}^{N} u(k-1)^2 ]^{-1} (1/N) Σ_{k=1}^{N} u(k-1) ỹ(k)
  → [ (u^2(k-1))₀ ]^{-1} (u(k-1) ỹ(k))₀,   as N → ∞

where ( )₀ denotes the stationary value of the argument. We have

(u^2(k-1))₀ = ((u(k))₀)^2
(u(k-1) ỹ(k))₀ = (u(k))₀ b ((u(k))₀ + d₀)
(u(k))₀ = Hud(1) d₀ = -( K b / (1 + a + K b) ) d₀

and the asymptotic LS estimate becomes

b̂ = ((u^2(k-1))₀)^{-1} (u(k-1) ỹ(k))₀ = b (1 + d₀/(u(k))₀) = b (1 - (1 + a + K b)/(K b)) = -(1 + a)/K

How do we interpret this result? The system may be redrawn as in Figure 5. Since uc = 0 we have u = -K y = -(K q/(q + a)) ỹ, so the block -(K q/(q + a)) acts as the controller for the system in Figure 5; its static gain is -K/(1 + a). It is then obvious that we have estimated the negative inverse of the static controller gain.

(c) Introduction of high-pass regressor filters as in Figure 6 eliminates, or at least reduces, the influence of the disturbance d on the estimate of b. One choice of regressor filter is Hf(q^{-1}) = 1 - q^{-1}, i.e. a differentiator. Another possibility is to introduce a constant in the regressor and then estimate both b and bd. The regression model is in this case

ỹ(t) = [ u(t-1)   1 ] [ b ; bd ] = φ(t)^T θ

Figure 6. Introduction of regressor filters (block diagram with the estimator, the filters Hf, the disturbance d, and the process b/(q + a)).

2.18 The data are described by y(t) = φ^T(t-1) θ + e(t), and the equations for recursive least squares are

θ̂(t) = θ̂(t-1) + K(t) ε(t)
ε(t) = y(t) - φ^T(t-1) θ̂(t-1)
K(t) = P(t) φ(t-1) = P(t-1) φ(t-1) / ( λ + φ^T(t-1) P(t-1) φ(t-1) )
P(t) = ( I - K(t) φ^T(t-1) ) P(t-1) / λ

Since the quantity P φ(t-1) appears in many places it is convenient to introduce it as an auxiliary variable w = P φ. The following computer code is then obtained:

Input        u, y: real
Parameter    lambda: real
State        phi, theta: vector
             P: symmetric matrix
Local        w: vector, den: real

"Compute residual
e = y - phi^T*theta
"Update estimate
w = P*phi
den = w^T*phi + lambda
theta = theta + w*e/den
"Update covariance matrix
P = (P - w*w^T/den)/lambda
"Update regression vector
phi = shift(phi)
phi(1) = -y
phi(n+1) = u
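A possible Python transcription of the pseudocode is sketched below. The variable names and the update steps follow the listing above, while the class wrapper and the default values (λ = 0.98, P(0) = 100 I) are my own illustrative choices.

# Recursive least squares with exponential forgetting.
import numpy as np

class RLS:
    def __init__(self, n_params, lam=0.98, p0=100.0):
        self.theta = np.zeros(n_params)
        self.P = p0 * np.eye(n_params)
        self.lam = lam

    def update(self, phi, y):
        e = y - phi @ self.theta                 # residual
        w = self.P @ phi                         # auxiliary variable w = P*phi
        den = phi @ w + self.lam
        self.theta = self.theta + w * e / den    # update estimate
        self.P = (self.P - np.outer(w, w) / den) / self.lam   # update covariance
        return e

A user would call update(phi, y) once per sample with the current regressor vector and measurement, shifting the regressor outside the class as in the listing.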


SOLUTIONS TO CHAPTER 3
3.1

Given the process

H(z) = B(z)/A(z) = (z + 1.2) / (z^2 - z + 0.25)

Design specification: the closed-loop system should have poles that correspond to the following characteristic polynomial in continuous time

s^2 + 2s + 1 = (s + 1)^2

This corresponds to

Am(z) = z^2 + am1 z + am2

with

am1 = -2 e^(-h),   am2 = e^(-2h)

(a) Determine an indirect STR of minimal order. The controller should have an integrator and the stationary gain should be 1.

Solution: Choose Bm such that

Bm(1)/Am(1) = 1

The integrator condition gives

R = R'(z - 1)

We get the following conditions

(1)   B T = Bm Ao
(2)   A R + B S = Am Ao

As B is unstable we must have Bm = B Bm'. This makes (1) read B T = B Bm' Ao, i.e. T = Bm' Ao. Choose Bm' such that

B(1) Bm'(1)/Am(1) = 1,   i.e.   Bm'(1) = Am(1)/B(1)

The simplest way is to choose Bm' = bm' = Am(1)/B(1). Further we have A(1) = 0.25 and B(1) = 2.2. The design equation (2) becomes

(z^2 + a1 z + a2)(z - 1)(z + r) + (b0 z + b1)(s0 z^2 + s1 z + s2) = (z^2 + am1 z + am2)(z^2 + ao1 z + ao2)

with a1 = -1, a2 = 0.25, and ao1 and ao2 chosen so that Ao is stable. Equating coefficients gives

z^3:  r - 1 + a1 + b0 s0 = ao1 + am1
z^2:  -r + a1 (r - 1) + a2 + b0 s1 + b1 s0 = ao2 + am1 ao1 + am2
z^1:  -a1 r + a2 (r - 1) + b0 s2 + b1 s1 = am1 ao2 + am2 ao1
z^0:  -a2 r + b1 s2 = am2 ao2

or

[ 1          b0   0    0  ] [ r  ]   [ ao1 + am1 + 1 - a1            ]
[ a1 - 1     b1   b0   0  ] [ s0 ] = [ ao2 + am1 ao1 + am2 + a1 - a2 ]
[ a2 - a1    0    b1   b0 ] [ s1 ]   [ am1 ao2 + am2 ao1 + a2        ]
[ -a2        0    0    b1 ] [ s2 ]   [ am2 ao2                       ]

Now choose to estimate

θ = [ b0  b1  a1  a2 ]^T

by equation 3.22 in the textbook.

(b) As H(z) is not minimum phase, B⁻ = B cannot be canceled; this is difficult, see page 118 in the textbook. A direct STR is instead given by Eq. 3.24,

Ao Am y = R̄ u + S̄ y

with R̄ = B R, S̄ = B S and T̄ = B T = B Bm' Ao. Furthermore

Ao Am ym = Ao Bm uc = Ao B Bm' uc = T̄ uc

Hence

y = R̄ (1/(Ao Am)) u + S̄ (1/(Ao Am)) y = R̄ uf + S̄ yf
ym = T̄ (1/(Ao Am)) uc = T̄ ucf

Now estimate R̄, S̄ and T̄ with a recursive method in the above equation. Then cancel B and calculate the control signal.

(c) Take a = 0 in Example 5.7, page 206 in the textbook. This gives

dt0/dt = -γ uc e
ds0/dt = γ y e

with e = y - ym.

3.3

The process has the transfer function

G(s) = b q / ((s + a)(s + p))

where p and q are known. The desired closed-loop system has the transfer function

Gm(s) = ω^2 / (s^2 + 2ζωs + ω^2)

Since a discrete-time controller is used the transfer functions are sampled. We get

H(z) = (b0 z + b1) / (z^2 + a1 z + a2)
Hm(z) = (bm0 z + bm1) / (z^2 + am1 z + am2)

The fact that p and q are known implies that one of the poles of H is known. This information will be disregarded. In the following we will assume

G(s) = 1 / (s (s + 1))

With h = 0.2 this gives

H(z) = 0.0187 (z + 0.936) / ((z - 1)(z - 0.819))

Furthermore we will assume that ω = 2 and ζ = 0.7.

Indirect STR:
The parameters of a general pulse transfer function of second order are estimated by recursive least squares (see page 51). We have

θ = [ b0  b1  a1  a2 ]^T
φ(t) = [ u(t-1)  u(t-2)  -y(t-1)  -y(t-2) ]^T

The controller is calculated by solving the Diophantine equation. We look at two cases.

1. B canceled:

(z^2 + a1 z + a2) + b0 (s0 z + s1) = z^2 + am1 z + am2

z^1:  a1 + b0 s0 = am1
z^0:  a2 + b0 s1 = am2

s0 = (am1 - a1)/b0
s1 = (am2 - a2)/b0

The controller is thus given by

R(z) = z + b1/b0
S(z) = s0 z + s1
T(z) = t0 z,   where   t0 = (1 + am1 + am2)/b0

2. B not canceled:
The design equation becomes

(z^2 + a1 z + a2)(z + r1) + (b0 z + b1)(s0 z + s1) = (z^2 + am1 z + am2)(z + ao1)

Identification of coefficients of equal powers of z gives

z^2:  a1 + r1 + b0 s0 = am1 + ao1
z^1:  a2 + a1 r1 + b1 s0 + b0 s1 = am1 ao1 + am2
z^0:  a2 r1 + b1 s1 = am2 ao1

The solution to these linear equations is

r1 = (b1^2 n1 - b0 b1 n2 + ao1 am2 b0^2) / (b0^2 a2 - a1 b0 b1 + b1^2)
s0 = (n1 - r1)/b0
s1 = (b0 n2 - b1 n1 - r1 (a1 b0 - b1)) / b0^2

where

n1 = am1 + ao1 - a1
n2 = am1 ao1 + am2 - a2

The solution exists if the denominator of r1 is different from zero, which means that there is no pole-zero cancellation. It is helpful to have access to computer algebra for these problems, e.g. Macsyma, Maple or Mathematica! Figure 7 shows a simulation of the controller obtained when the polynomial B is canceled. Notice the ringing of the control signal, which is typical for cancellation of a poorly damped zero; in this case the canceled zero is z = -0.936. In Figure 8 we show a simulation of the controller with no cancellation. This is clearly the way to solve the problem.
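The explicit formulas for r1, s0 and s1 are easy to check against a direct solution of the three linear equations. The sketch below uses the sampled model given above; the numerical values of am1, am2 and ao1 are my own illustrative choices (roughly ζ = 0.7, ω = 2, h = 0.2 and an observer pole at 0.1), not values from the book.

# Compare the closed-form controller parameters with a direct linear solve of
# (z^2 + a1 z + a2)(z + r1) + (b0 z + b1)(s0 z + s1) = (z^2 + am1 z + am2)(z + ao1).
import numpy as np

b0, b1 = 0.0187, 0.0187 * 0.936
a1, a2 = -1.819, 0.819
am1, am2, ao1 = -1.45, 0.57, -0.1      # hypothetical design values for illustration

n1 = am1 + ao1 - a1
n2 = am1 * ao1 + am2 - a2
r1 = (b1**2 * n1 - b0 * b1 * n2 + ao1 * am2 * b0**2) / (b0**2 * a2 - a1 * b0 * b1 + b1**2)
s0 = (n1 - r1) / b0
s1 = (b0 * n2 - b1 * n1 - r1 * (a1 * b0 - b1)) / b0**2

# direct solution of the same three equations
M = np.array([[1.0, b0, 0.0], [a1, b1, b0], [a2, 0.0, b1]])
rhs = np.array([n1, n2, am2 * ao1])
print(np.linalg.solve(M, rhs), (r1, s0, s1))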
Direct STR:
To obtain a direct self-tuning regulator we start with the design equation

A R + B S = Am Ao B⁺

Hence

B⁺ Am Ao y = A R y + B S y = B R u + B S y

and

y = R (B⁻/(Ao Am)) u + S (B⁻/(Ao Am)) y = R uf + S yf

Figure 7. Simulation in Problem 3.3. Process output and control signal are shown for the indirect self-tuning regulator when the process zero is canceled.

From this model R and S can be estimated. The polynomial T is then given by

T = t0 Ao Bm / B⁻

where t0 is chosen to give the correct steady-state gain. Again we separate two cases.

1. Cancel B:
If the polynomial B is canceled we have B⁺ = z + b1/b0 and B⁻ = b0. From the analysis of the indirect STR we know that no observer is needed in this case and that the controller has the structure deg R = deg S = 1. Hence

y(t) = R (b0/Am) u(t) + S (b0/Am) y(t)

Since b0 is not known we include it in the polynomials R and S and estimate it. The polynomial R is then not monic. We have

y(t) = (r0 q + r1) (1/Am) u(t) + (s0 q + s1) (1/Am) y(t)

To obtain a direct STR we thus estimate

θ = [ r0  r1  s0  s1 ]^T

Figure 8. Simulation in Problem 3.3. Process output and control signal are shown for the indirect self-tuning regulator when the process zero is not canceled.

by RLS. The case r0 = 0 must be taken care of separately. Furthermore T has the form T(q) = t0 q, and

B T / (A R + B S) = B t0 q / (B⁺ b0 Am) = t0 q / Am

To get unit steady-state gain we choose

t0 = Am(1)

A simulation of the system is shown in Fig. 9. We see the typical ringing phenomenon obtained with a controller that cancels a poorly damped zero. To avoid this we will develop an algorithm where the process zero is not canceled.
2. No cancellation of process zero:
We then have B⁺ = 1 and B⁻ = b0 q + b1. From the analysis of the indirect STR we know that a first-order observer is required, i.e. Ao = q + ao1. We have as before

(*)   y = R (B⁻/(Ao Am)) u + S (B⁻/(Ao Am)) y = R uf + S yf

Figure 9. Simulation in Problem 3.3. Process output and control signal are shown for the direct self-tuning regulator when the process zero is canceled.

Since B⁻ is not known we cannot, however, calculate uf and yf. One possibility is to rewrite (*) as

y = (R B⁻) (1/(Ao Am)) u + (S B⁻) (1/(Ao Am)) y

and to estimate R B⁻ and S B⁻ as second-order polynomials and then cancel the common factor B⁻ from the estimated polynomials. This is difficult because there will not be an exact cancellation. Another possibility is to use some estimate of B⁻. A third possibility is to try to estimate B⁻ R and B⁻ S as a bilinear problem. In Figs. 10-13 we show simulations when the model (*) is used with

B⁻ = 1
B⁻ = q
B⁻ = (q + 0.4)/1.4
B⁻ = (q - 0.4)/0.6

Figure 10. Simulation in Problem 3.3. Process output and control signal are shown for the direct self-tuning regulator when the process zero is not canceled and when B⁻ = 1.

3.4

The process has the transfer function

G(s) = b / (s (s + 1))

With proportional feedback

u = k (uc - y)

we get the closed-loop transfer function

Gcl(s) = k b / (s^2 + s + k b)

The gain k = 1/b gives the desired result. Idea for an STR: estimate b and use k = 1/b̂. To estimate b, introduce

s (s + 1) y = b u
[ s (s + 1)/(s + a)^2 ] y = b [ 1/(s + a)^2 ] u,   i.e.   yf = b uf

Figure 11. Simulation in Problem 3.3. Process output and control signal are shown for the direct self-tuning regulator when the process zero is not canceled and when B⁻ = q.

The equations on page 56 give

db̂/dt = P φ e = P uf (yf - b̂ uf)
dP/dt = λ P - P φ φ^T P = λ P - P^2 uf^2

With b̂(0) = 1, P(0) = 100, λ = 0.1, and a = 1 we get the result shown in Fig. 14.
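The estimator can be simulated with a simple Euler scheme. The sketch below is my own construction: it takes a = 1 (so that s(s+1)/(s+a)^2 = 1 - 1/(s+1)), a hypothetical true gain b = 0.5, and a square-wave command; only b̂(0) = 1, P(0) = 100 and λ = 0.1 are taken from the text, so the curves in Fig. 14 are not reproduced exactly.

# Indirect continuous-time STR of Problem 3.4: plant y'' + y' = b u,
# filtered signals yf = b*uf, gradient/least-squares estimator for b.
import numpy as np

b_true, lam, dt = 0.5, 0.1, 0.001
x1 = x2 = q = r1 = r2 = 0.0          # plant states and filter states
bhat, P = 1.0, 100.0

for k in range(int(60 / dt)):
    t = k * dt
    uc = 1.0 if (t // 10) % 2 == 0 else -1.0   # square-wave set point (assumed)
    u = (uc - x2) / bhat                       # controller gain k = 1/bhat
    x1 += dt * (-x1 + b_true * u)              # plant: y'' + y' = b u
    x2 += dt * x1
    q  += dt * (-q + x2)                       # yf = (1 - 1/(s+1)) y
    r1 += dt * (-r1 + u)                       # uf = u/(s+1)^2
    r2 += dt * (-r2 + r1)
    yf, uf = x2 - q, r2
    bhat += dt * P * uf * (yf - bhat * uf)     # estimator from page 56
    P    += dt * (lam * P - (P * uf) ** 2)

print(f"estimated b = {bhat:.3f} (true value {b_true})")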

Figure 12. Simulation in Problem 3.3. Process output and control signal are shown for the direct self-tuning regulator when the process zero is not canceled and when B⁻ = (q + 0.4)/1.4.

Figure 13. Simulation in Problem 3.3. Process output and control signal are shown for the direct self-tuning regulator when the process zero is not canceled and when B⁻ = (q - 0.4)/0.6.

Figure 14. Simulation in Problem 3.4. Process output, control signal and estimated parameter b̂ are shown for the indirect continuous-time self-tuning regulator.

SOLUTIONS TO CHAPTER 4
4.1

The estimate b̂ may be small because of a poor estimate. One possibility is to use a projection algorithm where the estimate is restricted to a given range, b0 ≤ b̂ ≤ b1. This requires prior information about the possible values of b. Another possibility is to replace

1/b̂   by   b̂/(b̂^2 + P)

where P is the variance of the estimate. Compare with the discussion of cautious control on pages 356-358.
4.10 Using (4.21) the output can be written as

(*)   yt+d = (R/C) ut + (S/C) yt + (terms that do not depend on ut) = r0 ut + ft

where ft collects everything that does not depend on ut. Consider minimization of

(+)   J = y^2_{t+d} + ρ u^2_t

Introducing the expression (*),

J = (r0 ut + ft)^2 + ρ u^2_t
  = (r0^2 + ρ) u^2_t + 2 r0 ut ft + f^2_t
  = (r0^2 + ρ) [ ut + r0 ft/(r0^2 + ρ) ]^2 + f^2_t - r0^2 f^2_t/(r0^2 + ρ)

Hence

J = (1/(r0^2 + ρ)) [ r0^2 ( ft + ((r0^2 + ρ)/r0) ut )^2 + ρ f^2_t ]
  = (1/(r0^2 + ρ)) [ r0^2 ( yt+d + (ρ/r0) ut )^2 + ρ f^2_t ]

Since r0^2 + ρ is a constant and ft does not depend on ut, minimizing (+) is the same as minimizing

J1 = ( yt+d + (ρ/r0) ut )^2
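The equivalence of the two criteria can also be verified symbolically; the following small sympy sketch (my own check) confirms that both give the same minimizing ut.

# Both J and J1 are minimized by ut = -r0*ft/(r0^2 + rho).
import sympy as sp

u, f = sp.symbols('u f', real=True)
r0, rho = sp.symbols('r0 rho', positive=True)
J  = (r0*u + f)**2 + rho*u**2
J1 = ((r0*u + f) + rho/r0*u)**2          # y_{t+d} = r0*u + f
u_opt  = sp.solve(sp.diff(J,  u), u)[0]
u_opt1 = sp.solve(sp.diff(J1, u), u)[0]
print(sp.simplify(u_opt - u_opt1))       # prints 0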


SOLUTIONS TO CHAPTER 5
5.1

The plant is

G(s) = B/A = 1 / (s (s + a))

The desired response is

Gm(s) = Bm/Am = ω^2 / (s^2 + 2ζωs + ω^2)

(a) Gradient method or MIT rule. Use formulas similar to (5.9). In this case B⁺ = 1 and Ao is of first order. The regulator structure is

R = s + r1,   S = s0 s + s1,   T = t0 Ao

This gives the updating rules (p = d/dt)

dr1/dt = γ e [ 1/(Ao Am) ] u
ds0/dt = γ e [ p/(Ao Am) ] y
ds1/dt = γ e [ 1/(Ao Am) ] y
dt0/dt = -γ e [ 1/(Ao Am) ] uc

(b) Stability theory, approach 1. First derive the error equation. If all process zeros are canceled we have

Ao Am y = A R1 y + b0 S y = B R1 u + b0 S y = b0 (R u + S y)

Further

Ao Am ym = Ao Bm uc = b0 T uc

Hence

Ao Am e = Ao Am (y - ym) = b0 (R u + S y - T uc)

e = (b0/(Ao Am)) (R u + S y - T uc)

Since 1/(Ao Am) is not SPR we introduce a polynomial D such that D/(Ao Am) is SPR. We then get

e = (b0 D/(Ao Am)) (R uf + S yf - T ucf)

where

uf = (1/D) u,   yf = (1/D) y,   ucf = (1/D) uc


(c) Stability theory, approach 2. In this case we assume that all states are measurable. Process model:

dx/dt = [ -a  0 ] x + [ 1 ] u  =  Ap x + Bp u,        y = [ 0  1 ] x = C x
        [  1  0 ]     [ 0 ]

Control law:

u = Lr uc - L x = θ3 uc - θ1 x1 - θ2 x2

The closed-loop system is

dx/dt = (Ap - Bp L) x + Bp Lr uc = A(θ) x + B(θ) uc,        y = C x

where

A(θ) = Ap - Bp L = [ -a - θ1   -θ2 ]        B(θ) = Bp Lr = [ θ3 ]
                   [  1          0 ]                       [  0 ]

The desired response is given by

dxm/dt = Am xm + Bm uc

where

Am = [ -2ζω   -ω^2 ]        Bm = [ ω^2 ]
     [  1       0  ]             [  0  ]

so that

(A - Am)^T = [ 2ζω - a - θ1   0 ]        (B - Bm)^T = [ θ3 - ω^2   0 ]
             [ ω^2 - θ2       0 ]

Introduce the state error e = x - xm. Then

(*)   de/dt = A x + B uc - Am xm - Bm uc = Am e + (A - Am) x + (B - Bm) uc

The error goes to zero if Am is stable and

A(θ) - Am = 0,        B(θ) - Bm = 0

It is thus necessary that A(θ) and B(θ) are such that there is a θ for which these conditions hold. Introduce the Lyapunov function

V = e^T P e + tr[ (A - Am)^T Qa (A - Am) ] + tr[ (B - Bm)^T Qb (B - Bm) ]

Noticing that

tr(A + B) = tr A + tr B,   x^T A x = tr(x x^T A) = tr(A x x^T),   tr(A B) = tr(B A)

we get

(+)   dV/dt = tr[ P (de/dt) e^T + P e (de/dt)^T ]
            + 2 tr[ (A - Am)^T Qa dA/dt ] + 2 tr[ (B - Bm)^T Qb dB/dt ]

But from (*)

P (de/dt) e^T = P ( Am e + (A - Am) x + (B - Bm) uc ) e^T
P e (de/dt)^T = P e ( Am e + (A - Am) x + (B - Bm) uc )^T

Introducing this into (+) and collecting the terms proportional to (A - Am)^T we find that they are

2 tr[ (A - Am)^T ( Qa dA/dt + P e x^T ) ]

Similarly the terms proportional to (B - Bm)^T are

2 tr[ (B - Bm)^T ( Qb dB/dt + P e uc^T ) ]

Hence

dV/dt = e^T ( Am^T P + P Am ) e
      + 2 tr[ (A - Am)^T ( Qa dA/dt + P e x^T ) ]
      + 2 tr[ (B - Bm)^T ( Qb dB/dt + P e uc^T ) ]

Hence if the symmetric matrix P is chosen so that

Am^T P + P Am = -Q

where Q is positive definite (this can always be done if Am is stable!) and the parameters are updated so that the trace terms vanish, i.e.

Qa dA/dt + P e x^T = 0
Qb dB/dt + P e uc^T = 0

we get

dV/dt = -e^T Q e

The equations for updating the parameters derived above can now be used. Since

dA/dt = [ -dθ1/dt   -dθ2/dt ]        dB/dt = [ dθ3/dt ]
        [  0          0     ]                [ 0      ]

only the first rows of these matrix conditions apply, and with Qa = I and Qb = 1 they give

dθ1/dt = p11 e1 x1 + p12 e2 x1 = (p11 e1 + p12 e2) x1
dθ2/dt = p11 e1 x2 + p12 e2 x2 = (p11 e1 + p12 e2) x2
dθ3/dt = -(p11 e1 + p12 e2) uc

where e1 = x1 - xm1 and e2 = x2 - xm2. It now remains to determine P such that

Am^T P + P Am = -Q

Choosing ζ = 0.707, ω = 2 and

Q = [ 41.2548   11.3137 ]
    [ 11.3137   16.0000 ]

we get

P = [ 4   2 ]
    [ 2  16 ]

The parameter update laws become

dθ1/dt = (4 e1 + 2 e2) x1
dθ2/dt = (4 e1 + 2 e2) x2
dθ3/dt = -(4 e1 + 2 e2) uc

A simulation of the system is given in Fig. 15.


5.2

The block diagram of the system is shown in Fig. 16. The PI version of the SPR rule is

(*)   dθ/dt = -γ1 d(uc e)/dt - γ2 uc e

Figure 15. Simulation in Problem 5.1. Top: process and model states, x1 (full), xm1 (dashed), x2 (dotted), and xm2 (dash-dotted). Bottom: controller parameters θ3 (full), θ1 (dashed), and θ2 (dotted).

Figure 16. Block diagram in Problem 5.2.

To derive the error equation we notice that

dym/dt = θ0 uc
dy/dt = θ uc

Hence

de/dt = (θ - θ0) uc

and

d^2e/dt^2 = (dθ/dt) uc + (θ - θ0) duc/dt

Inserting the parameter update law (*) into this we get

d^2e/dt^2 = -γ1 ( e duc/dt + uc de/dt ) uc - γ2 uc^2 e + (θ - θ0) duc/dt

Hence

d^2e/dt^2 + γ1 uc^2 de/dt + ( γ1 uc duc/dt + γ2 uc^2 ) e = (θ - θ0) duc/dt

Assuming that uc is constant we get the error equation

d^2e/dt^2 + γ1 uc^2 de/dt + γ2 uc^2 e = 0

If we want this to be a second-order system with relative damping ζ and natural frequency ω we get

γ1 uc^2 = 2ζω        γ1 = 2ζω/uc^2
γ2 uc^2 = ω^2        γ2 = ω^2/uc^2

This gives an indication of how the parameters γ1 and γ2 should be selected. The analysis was based on the assumption that uc is constant. To get some insight into what happens when uc changes we show a simulation where uc is a triangular wave with varying period. The adaptation gains are chosen for different ζ and ω. Figure 17 shows what happens when the period of the triangular wave is 20 and ω = 0.5, 1 and 2, corresponding to the natural periods 12, 6 and 3. Figure 18 shows what happens when uc is changed more rapidly.
5.6

The transfer function is

G(s) = (b0 s^2 + b1 s + b2) / (s^2 + a1 s + a2)

The transfer function has no poles or zeros in the right half-plane if a1 ≥ 0, a2 ≥ 0, b0 ≥ 0, b1 ≥ 0, and b2 ≥ 0. Consider

G(iω) = B(iω) A(-iω) / ( A(iω) A(-iω) )

The condition Re G(iω) ≥ 0 is equivalent to Re( B(iω) A(-iω) ) ≥ 0. But

g(ω) = Re[ (-b0 ω^2 + i b1 ω + b2)(-ω^2 - i a1 ω + a2) ]
     = b0 ω^4 + (a1 b1 - b0 a2 - b2) ω^2 + a2 b2
Figure 17. Simulation in Problem 5.2 for a triangular wave of period 20. Left top: process and model outputs; left bottom: estimated parameter when ω = 0.5 (full), 1 (dashed), and 2 (dotted) for ζ = 0.7. Right top: process and model outputs; right bottom: estimated parameter when ζ = 0.4 (full), 0.7 (dashed), and 1.0 (dotted) for ω = 1.

Completing the square, the function can be written as

g(ω) = b0 [ ω^2 + (a1 b1 - b0 a2 - b2)/(2 b0) ]^2 + a2 b2 - (a1 b1 - b0 a2 - b2)^2/(4 b0)

When b0 = 0 the condition for g to be nonnegative is that

(i)   a1 b1 - b0 a2 - b2 ≥ 0

If b0 > 0 the function g(ω) is nonnegative for all ω if either (i) holds or if

a1 b1 - b0 a2 - b2 < 0   and   a2 b2 ≥ (a1 b1 - b0 a2 - b2)^2/(4 b0)

Example 1. Consider

G(s) = (s^2 + 6s + 8) / (s^2 + 4s + 3)

We have a1 b1 - b0 a2 - b2 = 24 - 3 - 8 = 13 > 0. Hence the transfer function G(s) is SPR.

Example 2. Consider

G(s) = (3s^2 + s + 1) / (s^2 + 3s + 4)

Figure 18. Simulation in Problem 5.2 for a triangular wave of period 5. Left top: process and model outputs; left bottom: estimated parameter when ω = 0.5 (full), 1 (dashed), and 2 (dotted) for ζ = 0.7. Right top: process and model outputs; right bottom: estimated parameter when ζ = 0.4 (full), 0.7 (dashed), and 1.0 (dotted) for ω = 1.

We have a1 b1 - a2 b0 - b2 = 3 - 12 - 1 = -10 < 0. Furthermore

a2 b2 = 4 < (a1 b1 - a2 b0 - b2)^2/(4 b0) = 100/12

Hence the transfer function G(s) is neither PR nor SPR.
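Both examples can be cross-checked numerically by evaluating Re G(iω) over a frequency grid, which is exactly what the sign of g(ω) describes. The sketch below is my own.

# Positive-realness check: Re G(i*w) should stay nonnegative for a PR function.
import numpy as np

def min_re_G(b, a, w=np.logspace(-2, 3, 2000)):
    s = 1j * w
    G = np.polyval(b, s) / np.polyval(a, s)
    return G.real.min()

print(min_re_G([1, 6, 8], [1, 4, 3]))   # Example 1: stays positive
print(min_re_G([3, 1, 1], [1, 3, 4]))   # Example 2: goes negative, not PR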


5.7

Consider the system

dx/dt = A x + B1 u
y = C1 x

where B1 = [ 1  0  ...  0 ]^T. Let Q be positive definite and let P be the solution of the Lyapunov equation

(*)   A^T P + P A = -Q

Define C1 as

C1 = B1^T P = [ p11  p12  ...  p1n ]

According to the Kalman-Yakubovich lemma the transfer function

G1(s) = C1 (sI - A)^{-1} B1

is then strictly positive real. This transfer function can, however, be written as

G1(s) = (p11 s^(n-1) + p12 s^(n-2) + ... + p1n) / (s^n + a1 s^(n-1) + ... + an)

Since there are good numerical algorithms for solving the Lyapunov equation we can use this result to construct transfer functions that are SPR. The method is straightforward:

1. Choose A stable.
2. Solve (*) for a given positive definite Q.
3. Choose B(s) as

B(s) = p11 s^(n-1) + p12 s^(n-2) + ... + p1n
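A small numerical sketch of this recipe is given below, using scipy's Lyapunov solver; the particular A and Q are arbitrary illustrative choices, not values from the book.

# Construct an SPR transfer function C1 (sI - A)^{-1} B1 with C1 = B1^T P.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])      # stable: characteristic poly (s+1)(s+2)(s+3)
Q = np.eye(3)
# solve_continuous_lyapunov(M, R) solves M X + X M^H = R, so pass A^T and -Q
P = solve_continuous_lyapunov(A.T, -Q)

B1 = np.array([[1.0], [0.0], [0.0]])
C1 = B1.T @ P                            # C1 = [p11 p12 p13]
w = np.logspace(-2, 3, 1000)
reG = [(C1 @ np.linalg.solve(1j * wk * np.eye(3) - A, B1)).real.item() for wk in w]
print(min(reG) >= 0)                     # Re G1(i*w) stays nonnegative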

5.11 Let us first solve the underlying design problem for systems with known parameters. This can be done using pole placement. Let the plant dynamics be

y = (B/A) u

and let the controller be

R u = T uc - S y

The basic design equation is then

(*)   A R + B S = Am Ao

In this case we have

A = (s + a)(s + p),   B = b q,   Am = s^2 + 2ζωs + ω^2

We need an observer of at least first order. The design equation (*) then becomes

(+)   (s + a)(s + p)(s + r1) + b q (s0 s + s1) = (s^2 + 2ζωs + ω^2)(s + ao)

where Ao = s + ao is the observer polynomial. The controller is thus of the form

du/dt + r1 u = t0 uc - s0 dy/dt - s1 y


It has four parameters that have to be updated: r1, t0, s0, and s1. If no prior knowledge is available we thus have to update these four parameters. When the parameter p is known it follows from the design equation (+) that there is a relation between the parameters, and it then suffices to estimate three parameters. This is particularly easy to see when the observer polynomial is chosen as Ao(s) = s + p. Putting s = -p in (+) gives

(**)   -s0 p + s1 = 0

Hence

s1 = p s0

In this particular case we can thus update t0, s0, and r1 and compute s1 from (**). Notice, however, that the knowledge of q is of no value, since q always appears in combination with the unknown parameter b. The equations for updating the parameters are derived in the usual way. With ao = p, equation (+) simplifies to

(s + a)(s + r1) + b q s0 = s^2 + 2ζωs + ω^2

Introducing A'(s) = s + a, S'(s) = s0 and T = t0 we get

y = ( B T / (A' R + B S') ) uc

and the sensitivity derivatives

∂e/∂t0 = ( B / (A' R + B S') ) uc
∂e/∂s0 = -( B / (A' R + B S') ) y
∂e/∂r1 = -( A' / (A' R + B S') ) y = -( 1 / (A' R + B S') ) ( B / (s + p) ) u

The MIT rule then gives

dr1/dt = γ e ( 1/((s + p) Am) ) u
ds0/dt = γ e ( 1/Am ) y
dt0/dt = -γ e ( 1/Am ) uc

A simulation of the controller is given in Fig. 19.

Figure 19. Simulation in Problem 5.11. Top: process (full) and model (dashed) output. Bottom: estimated parameters r1 (full), s0 (dashed), and t0 (dotted).

5.12 The closed-loop transfer function is

Gcl(s) = k b / (s^2 + s + k b)

This is compatible with the desired dynamics. The error is

e = y - ym,    y = ( k b / (p^2 + p + k b) ) uc,    p = d/dt

Hence

∂e/∂k = ( b / (p^2 + p + k b) ) uc - ( b^2 k / (p^2 + p + k b)^2 ) uc = ( b / (p^2 + p + k b) ) (uc - y)

The following adjustment rule is obtained from the MIT rule

dk/dt = -γ' e ∂e/∂k = -γ e ( 1 / (p^2 + p + 1) ) (uc - y)

where k b in the denominator has been replaced by its desired value 1 and b has been absorbed into γ.

Figure 20. Simulation in Problem 5.12. Left: process (full) and model (dashed) output. Right: estimated parameter k, for γ = 0.05, 0.3, and 1.

A simulation of the system is given in Fig. 20. It shows the behavior of the system when uc is a square wave.

SOLUTIONS TO CHAPTER 6
6.1

The process is given by

G(s) = b / (s (s + a))

and the regressor filter is

Gf(s) = 1/Af(s) = 1/Am(s) = 1 / (s^2 + 2ζωs + ω^2)

The controller is given by

U(s) = -( (s0 s + s1)/(s + r1) ) Y(s) + ( t0 (s + ao)/(s + r1) ) Uc(s)

For the estimation of the process parameters we use a continuous-time RLS algorithm. The process is of second order and the controller is of first order. The regressor filter is of second order and both input and output should be filtered. Hence we need seven states in ξ. The process parameters are contained in θ = [ a  b ]^T and the controller parameters are obtained from the pole placement relations

r1 = 2ζω + ao - a
s0 = (ω^2 + 2ζω ao - a r1)/b
s1 = ao ω^2/b
t0 = ω^2/b

To find A(θ), B(θ), C(θ) and D(θ) we start by finding realizations of y, yf, uf and the controller. We get

d/dt [ ẏ ]   [ -a  0 ] [ ẏ ]   [ b ]
     [ y ] = [  1  0 ] [ y ] + [ 0 ] u

d/dt [ ẏf ]   [ -2ζω  -ω^2 ] [ ẏf ]   [ 1 ]
     [ yf ] = [  1      0  ] [ yf ] + [ 0 ] y

d/dt [ u̇f ]   [ -2ζω  -ω^2 ] [ u̇f ]   [ 1 ]
     [ uf ] = [  1      0  ] [ uf ] + [ 0 ] u

and the control law can be rewritten as

u = -s0 y + t0 uc + ( (r1 s0 - s1)/(p + r1) ) y + ( (ao t0 - r1 t0)/(p + r1) ) uc

We need one state for the controller and it can be realized as

dx/dt = -r1 x + (r1 s0 - s1) y + (ao t0 - r1 t0) uc
u = x - s0 y + t0 uc

Combining the states ξ = [ ẏ  y  ẏf  yf  u̇f  uf  x ]^T results in

        [ -a   -b s0         0      0      0      0      b   ]       [ b t0          ]
        [  1    0            0      0      0      0      0   ]       [ 0             ]
        [  0    1           -2ζω   -ω^2    0      0      0   ]       [ 0             ]
dξ/dt = [  0    0            1      0      0      0      0   ] ξ  +  [ 0             ] uc
        [  0   -s0           0      0     -2ζω   -ω^2    1   ]       [ t0            ]
        [  0    0            0      0      1      0      0   ]       [ 0             ]
        [  0    r1 s0 - s1   0      0      0      0     -r1  ]       [ ao t0 - r1 t0 ]

which defines the relation

dξ/dt = A(θ̂) ξ + B(θ̂) uc

Now we need to express e and φ in the states to find the C and D matrices. The estimator tries to find the parameters in

yf = ( b / (p (p + a)) ) uf

which is rewritten as

p^2 yf = -a p yf + b uf = φ^T θ,    φ = [ -ẏf   uf ]^T,   θ = [ a  b ]^T

Clearly

φ = [ 0  0  -1  0  0  0  0 ] ξ
    [ 0  0   0  0  0  1  0 ]

and

e = p^2 yf - φ^T θ̂ = -2ζω ẏf - ω^2 yf + y + â ẏf - b̂ uf

If we use the relations â = 2ζω + ao - r̂1 and b̂ = ω^2/t̂0, then e can be written as

e = y + (ao - r̂1) ẏf - ω^2 yf - (ω^2/t̂0) uf = [ 0  1  ao - r̂1  -ω^2  0  -ω^2/t̂0  0 ] ξ

Combining the expressions for e and φ gives

[ e ]   [ 0  1  ao - r̂1  -ω^2  0  -ω^2/t̂0  0 ]
[ φ ] = [ 0  0  -1         0    0   0        0 ] ξ = C(θ̂) ξ
        [ 0  0   0         0    0   1        0 ]

i.e. D(θ̂) = 0. As given in the problem description, the estimator is defined by

dθ̂/dt = P φ e
dP/dt = λ P - P φ φ^T P

where P is a 2 × 2 matrix and e and φ are given above.


6.3

The averaged equations for the parameter estimates are given by (6.54) on page 303. In this particular case we have

G(s) = a b^2 / ((s + a)(s + b)^2)
Gm(s) = a / (s + a)

To use the averaged equations we need

avg[ (Gm uc)(G uc) ] = avg[ v · ( b^2/(p + b)^2 ) v ]
  = (1/2) ( a^2 u0^2/(a^2 + ω^2) ) ( b^2/(b^2 + ω^2) ) ( 2 cos^2 φ1 - 1 )
  = a^2 b^2 u0^2 (b^2 - ω^2) / ( 2 (a^2 + ω^2)(b^2 + ω^2)^2 )

where we have introduced

uc = u0 sin ωt,   v = ( a/(p + a) ) uc,   φ1 = arctan(ω/b)

Similarly we have

avg[ uc (G uc) ] = (u0^2/2) |G(iω)| cos(φ + 2 φ1)
  = (u0^2/2) ( a b^2 / ( (a^2 + ω^2)^(1/2) (b^2 + ω^2) ) ) ( cos φ cos 2φ1 - sin φ sin 2φ1 )
  = u0^2 a b^2 ( a b^2 - ω^2 (a + 2b) ) / ( 2 (a^2 + ω^2)(b^2 + ω^2)^2 )

where

φ = arctan(ω/a),   φ1 = arctan(ω/b)

It follows from the analysis on pages 302-304 that the MIT rule gives a stable system as long as ω < b, while the stability condition for the SPR rule is

ω < b ( a/(a + 2b) )^(1/2)

With b = 10a we get

ω_MIT = 10 a,   ω_SPR = 2.18 a

6.10 The adaptive system was designed for a process with the transfer function

(1)   G(s) = b/(s + a)

The controller has the structure

(2)   u = θ1 uc - θ2 y

The desired response is

(3)   Gm(s) = bm/(s + am)

Combining (1) and (2) gives the closed-loop transfer function

Gcl = b θ1 / (s + a + b θ2)

Equating this with Gm(s) given by (3) gives

b θ1 = bm
a + b θ2 = am

If these equations are solved for θ1 and θ2 we obtain the controller parameters that give the desired closed-loop system. Conversely, if the equations are solved for a and b we obtain the parameters of the process model that corresponds to given controller parameters. This gives

a = am - bm θ2/θ1
b = bm/θ1

The parameters a and b can thus be interpreted as the parameters of the model the controller believes in. Inserting the expressions for θ1 and θ2 from page 318 we get

(+)   a(ω) = (229 - 31 ω^2)/(259 - ω^2)
      b(ω) = 458/(259 - ω^2)

When

ω = (229/31)^(1/2) = 2.7179

we get a(ω) = 0. The reason for this is that the value of the plant transfer function

(*)   G0(s) = 458/((s + 1)(s^2 + 30s + 229))

is G0(2.7179i) = -0.6697i, i.e. purely imaginary at this frequency. The only way to obtain a purely imaginary value of the model transfer function

(**)   Ĝ(s) = b/(s + a)

is to make a = 0. Also notice that b(2.7179) = 1.8203, which gives Ĝ(2.7179i) = -0.6697i. When ω = sqrt(259) = 16.09 we get infinite values of a and b. Notice that G0(i sqrt(259)) = -0.0587, which is real and negative. The only way to make Ĝ(iω) real and negative is to have infinitely large values of a and b. It is thus easy to explain the behavior of the algorithm from the system identification point of view: the controller can be interpreted as fitting a model (**) to the process dynamics (*). With a sinusoidal input it is possible to get a perfect fit, and the parameters are then given by (+).
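The interpretation can be checked numerically; the sketch below (my own) evaluates the true transfer function and the believed first-order model parameters at the two frequencies discussed above.

# Evaluate G0(iw) and the fitted model parameters a(w), b(w).
import numpy as np

def G0(s):
    return 458.0 / ((s + 1) * (s**2 + 30*s + 229))

def a_b(w):
    return (229 - 31*w**2) / (259 - w**2), 458.0 / (259 - w**2)

for w in (2.7179, np.sqrt(259) - 1e-6):
    a, b = a_b(w)
    print(f"w = {w:6.3f}: G0(iw) = {G0(1j*w):.4f}, a = {a:9.2f}, b = {b:9.2f}")
# At w = 2.7179 the plant is purely imaginary and a = 0; near w = sqrt(259) it
# is real and negative and a, b blow up, exactly as argued above.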
