
Optimal Feedback Nonlinear Control and Two Point

Boundary Value Problem for Systems with Parameter

M. POPESCU∗

∗ Institute of Mathematical Statistics and Applied Mathematics of the Romanian Academy, P.O. Box 1-24, RO-010145, Bucharest, Romania, e-mail: ima popescu@yahoo.com

Communicated by

Abstract. We consider the optimization problem of minimizing quadratic functionals of Bolza type subject to constraints represented by differential systems with a parameter. The uniqueness of the optimal nonlinear feedback control obtained is demonstrated by using the fact that the Hilbert control space U is rotund. The solution of the two-point boundary value problem requires the determination of the solution of the system in variations associated with the linearized system. The construction of the solution uses an iterative procedure that yields the initial value of the adjoint variable. By expressing the state vectors x ∈ X and the control vectors u ∈ U in orthonormal bases, a numerical method for approximating the solution of the optimum problem is developed.

Key words: Semigroup, variation system, two-point boundary value problem, rotund space, orthonormal basis.

1. Introduction

An extended class of dynamic phenomena has as its objective the optimization of performance indices expressed by quadratic functionals of Bolza type. The linear quadratic problem belongs to this class; its constraints are represented by controlled linear differential systems. The solution of the Bellman equation is sought as a quadratic form with undetermined coefficients. Thus, the Riccati matrix differential equation and the Riccati algebraic equation are associated with the linear quadratic problem, corresponding to the finite and infinite time horizons respectively. The solution of the Riccati equations is a positive semidefinite matrix, used to determine the optimal control, the minimum value of the functional and the stabilization of the linear system (Riccati algebraic equation) [1], [9], [4], [20].

The extremization of quadratic functionals with constraints represented by nonlinear affine and bilinear systems has been treated by the author under the hypothesis that the algebra generated by the vector fields defining the nonlinear system forms a Lie algebra. At the same time, the operator defined by the set of Lie brackets is assumed to be nilpotent [5], [7], [12], [13], [14]. The calculus of variations offers analytical methods for the analysis of necessary and sufficient conditions of optimality. Thus, conditions for the positivity of the second variation of the Bolza functional on the final constraint manifold have been determined, together with sufficient optimality conditions consisting in the nonexistence of points conjugate to the extremities along the optimal trajectory. The results obtained are extended to nonlinear systems with parameters, for the cases of nonsingular and singular control [6], [10], [15], [16], [17].

The optimization of some dynamic processes determines the nonlinear feedback control by Taylor series expansion, under the hypothesis that the state and control variables lie in a neighborhood of the origin [2], [3], [8], [16]. The nonlinear feedback control can be obtained by eliminating the adjoint vector through repeated differentiation of the Hamiltonian with respect to time. This procedure involves the solution of a first-order quasi-linear partial differential system in the case of equal dimensions of the state and control vectors [2], [16].

By considering the adjoint vector as a function of the state vector, the author has determined the fundamental solution of the two-point boundary value problem for linear control systems, together with sufficient conditions for the existence of the solution [19].

The present study treats the two-point boundary value problem for quasi-linear control systems with a parameter. The Bolza functional is written in the Hilbert space L^2 as a function of the solutions of the quasi-linear system considered. Using the minimum condition of this functional with respect to the control u, one obtains the optimal nonlinear feedback control ũ.

Because the optimal feedback control belongs to a rotund space, it follows that it is unique. One demonstrates a fundamental result for the variation system, used in determining the solution of the two-point boundary value problem and in constructing a field of extremals in a neighborhood of the optimal solution.

We consider {e_i} an orthonormal basis for X and {d_i} an orthonormal basis for U. By writing the state vector as a function of the control u ∈ U and of the orthonormal bases, one obtains an algorithm which represents a numerical method for approximating the solution of the optimum problem formulated. This method was used by the author for minimizing energy in orbital rendezvous problems [28].

2. The optimal control problem

We are concerned with a dynamical system governed by the following differential equation

\[
\dot{x} = A x + B u + F(\lambda, t), \qquad x(0, \lambda) = x_0(\lambda) \in H, \qquad t \in [0, t_f] \tag{1}
\]

where A : D(A) ⊂ H −→ H and B : U −→ H are linear operators defined on the Hilbert spaces H (state space) and U (control space), λ ∈ [λ_0, λ_f] ⊂ Λ (parameter space), and F is a function of the parameter and of time.

Let X, Z be Hilbert spaces. We denote by L(X, Z) the Banach space of all linear bounded operators T : X −→ Z

endowed with the norm ‖T‖ = sup { |T x| : x ∈ X, |x| ≤ 1 }.

Definition 2.1. A strongly continuous semigroup on X is a mapping T : [0, ∞) −→ L(X), t ↦ T(t), such that

(i) T(t + s) = T(t) T(s) for all t, s ≥ 0, and T(0) = I,

(ii) T( · ) x is continuous for all x ∈ X.

Remark 2.1. ‖T( · )‖ is locally bounded, by the uniform boundedness theorem.

The infinitesimal generator A of T( · ) is defined by
\[
D(A) = \Big\{ x \in X : \exists \lim_{h \to 0^+} \frac{T(h) - I}{h}\, x \Big\}, \qquad A x = \lim_{h \to 0^+} \frac{T(h) - I}{h}\, x \tag{2}
\]

We now consider a particular case of the system equation (1), defined by: A ∈ M(n, n) and B ∈ M(n, m) are constant matrices, the state space is E = R^n, the control space is U = R^m, and F has n components.

We assume that the cost functional is of the form


\[
J = \frac{1}{2} \int_0^{t_f} \Big[ (Q\, x(s, \lambda), x(s, \lambda)) + (R\, u(s, \lambda), u(s, \lambda)) \Big]\, ds + \frac{1}{2} \big( G(t_f)\, x(t_f, \lambda), x(t_f, \lambda) \big) \tag{3}
\]
where Q ∈ M(n, n), R ∈ M(m, m) and G(t_f) ∈ M(n, n) are symmetric, nonnegative matrices and R is positive definite.

We have
\[
x(t, \lambda) \in L^2[0, t_f] \times L^2[\lambda_0, \lambda_f] = X \tag{4}
\]

J_{t_f} can be written in terms of inner products in L^2:
\[
J_{t_f} = \frac{1}{2} \langle Q x, x \rangle + \frac{1}{2} \langle R u, u \rangle \tag{5}
\]

The optimal control problem is formulated as follows: determine u ∈ U, defined on [t_0, t_f] × [λ_0, λ_f], such that the functional J_{t_f} is minimized subject to the constraint imposed by the model system (1).
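
As a concrete illustration of the problem just stated, the sketch below sets up a small finite-dimensional instance in Python. The particular matrices, the parameter dependence of F, and the discretization are invented for demonstration and are not taken from the paper.

```python
# Illustrative finite-dimensional instance of problem (1)-(3).
# All numerical data below are invented for demonstration.
import numpy as np

n, m = 2, 1                              # state and control dimensions
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])             # constant system matrix (stable)
B = np.array([[0.0],
              [1.0]])
Q = np.eye(n)                            # symmetric, nonnegative state weight
R = np.array([[1.0]])                    # symmetric, positive definite control weight
G_tf = 0.5 * np.eye(n)                   # terminal weight G(t_f)
t_f = 5.0

def F(lam, t):
    """Forcing term of (1), depending on the parameter lam and on time t."""
    return lam * np.array([np.sin(t), 0.0])

def bolza_cost(ts, xs, us):
    """Trapezoidal approximation of the Bolza functional (3) on a time grid ts."""
    run = np.array([xs[k] @ Q @ xs[k] + us[k] @ R @ us[k] for k in range(len(ts))])
    integral = 0.5 * np.sum((run[1:] + run[:-1]) * np.diff(ts))
    return 0.5 * integral + 0.5 * xs[-1] @ G_tf @ xs[-1]
```

Any candidate control sampled on the grid can then be scored with bolza_cost and compared against the feedback law derived in the following sections.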

3. Optimal non-linear feedback control

A generates a strongly continuous semigroup T(t) = e^{t A} on X. The solution of the system (1) is given by
\[
x(t, \lambda) = T(t)\, x_0(\lambda) + \int_0^t T(t - s)\, B\, u(\lambda, s)\, ds + \int_0^t T(t - s)\, F(\lambda, s)\, ds, \qquad t \in [0, t_f] \tag{6}
\]
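
For the finite-dimensional particular case, formula (6) can be evaluated directly. The sketch below uses the matrix exponential for T(t) = e^{tA} and a trapezoidal quadrature in s; the grid size and the example inputs are my own illustrative choices.

```python
# Sketch: evaluate the variation-of-constants formula (6) numerically when
# T(t) = expm(t*A).  Quadrature grid and inputs are illustrative choices.
import numpy as np
from scipy.linalg import expm

def solve_state(A, B, x0, u, F, lam, t, num=400):
    """Approximate x(t, lam) from (6) by the trapezoidal rule in s."""
    s = np.linspace(0.0, t, num)
    vals = np.array([expm((t - si) * A) @ (B @ u(lam, si) + F(lam, si))
                     for si in s])
    integral = 0.5 * np.sum((vals[1:] + vals[:-1]).T * np.diff(s), axis=1)
    return expm(t * A) @ x0 + integral

# Example use, with the illustrative A, B, F of the previous sketch:
#   x_t = solve_state(A, B, np.array([1.0, 0.0]),
#                     lambda lam, s: np.zeros(1), F, 0.3, 2.0)
```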

Let E_t : X → X be defined by
\[
E_t F = \int_0^t T(t - \tau)\, F(\lambda, \tau)\, d\tau, \qquad t \in [0, t_f] \tag{7}
\]
while
\[
P(\lambda, t) = T(t)\, x_0(\lambda) + E_t F, \qquad t \in [0, t_f] \tag{8}
\]
so that equation (6) can be written
\[
x(\lambda, t) = E_t B u + P, \qquad t \in [0, t_f] \tag{9}
\]

Substituting this into the expression for J_{t_f} gives
\[
J_{t_f} = \frac{1}{2} \Big[ \big\langle E_t B u + P,\; Q (E_t B u + P) \big\rangle + \langle u, R u \rangle \Big] \tag{10}
\]
and by rearrangement
\[
J_{t_f} = \frac{1}{2} \Big[ \big\langle (B^* E_t^* Q E_t B + R)\, u, u \big\rangle + 2 \big\langle B^* E_t^* Q P, u \big\rangle + \langle P, Q P \rangle \Big] \tag{11}
\]
The minimization of the cost functional J is given by ∂J/∂u = 0. Using (9), one obtains the optimal control
\[
\tilde{u} = -R^{-1} B^* E_t^* Q\, \tilde{x}, \qquad t \in [0, t_f] \tag{12}
\]
where the optimal state x(t, λ) is denoted by x̃.

Of course, x̃ is unknown, but this difficulty can be overcome by making use of the adjoint relation
\[
E_t^* Q\, \tilde{x} = \int_0^t T^*(t - s)\, Q\, \tilde{x}\, ds, \qquad t \in [0, t_f] \tag{13}
\]

4. Uniqueness of optimal control

One assumes that all the real parts ν_i (i = 1, . . . , n) of the eigenvalues of the matrix A are negative. Let ν < 0 be the greatest of the ν_i. For the solution (6) of the system (1) we have
\[
\|\tilde{x}\| \le \|e^{A t}\|\, \|x_0\| + \int_0^t \|e^{A (t - s)}\| \big( \|B\, \tilde{u}(\tilde{x}, s)\| + \|F(\lambda, s)\| \big)\, ds, \qquad t \in [0, t_f] \tag{14}
\]

The norm of the matrix e^{A t} can be bounded as
\[
\|e^{A t}\| \le N e^{\nu t} \tag{15}
\]
where N is a constant.
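
The constants ν and N in (15) can be estimated numerically. For a diagonalizable A, one admissible choice is N = cond(V), the condition number of the eigenvector matrix; this particular choice is mine, not the paper's, and the check below is only an illustration.

```python
# Numerical illustration of the bound (15): for a diagonalizable stable A,
# ||e^{At}|| <= N e^{nu t} with nu = max Re(eigenvalue) and, for instance,
# N = cond(V), where V is the eigenvector matrix (one standard, valid choice).
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
eigvals, V = np.linalg.eig(A)
nu = eigvals.real.max()                   # largest real part, negative here
N = np.linalg.cond(V)                     # crude but admissible constant

for t in np.linspace(0.0, 5.0, 6):
    lhs = np.linalg.norm(expm(t * A), 2)  # spectral norm of e^{At}
    rhs = N * np.exp(nu * t)
    assert lhs <= rhs + 1e-9, (t, lhs, rhs)
```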

From the continuity of the operator E_t^* it follows that it is bounded for t ∈ [0, t_f]; hence there exists a constant M_1 for which
\[
\|B\, u(\tilde{x}, s)\| = \big\| -B R^{-1} B^* E_t^* Q\, \tilde{x} \big\| \le \|B R^{-1} B^*\|\, M_1\, \|Q\|\, \|\tilde{x}\| \tag{16}
\]

In the case of mappings from X to itself, we follow the usual convention and denote the set of all bounded linear mappings E_t : X → X by L(X); thus L(X, X) = L(X). L(X) is normed by defining ‖E_t‖ = sup { ‖E_t x‖ : x ∈ β(X) }, E_t ∈ L(X), where β(X) is the unit ball of X.

Theorem 4.1. Let X be a Banach space; then the normed space L(X) is complete and hence is also a Banach space.

Let B̄ = B R^{-1} B^* be an operator on a normed vector space V. We have
\[
\|\bar{B}\|^2 = \sup_{\|z\| \le 1} \|\bar{B} z\|^2 = \sup_{\|z\| \le 1} \langle \bar{B}^* \bar{B}\, z, z \rangle = \sup_{\|z\| \le 1} \sum_{i=1}^{n} \Big( \sum_{k=1}^{n} \bar{b}_{i k}\, z_k \Big)^2 \tag{17}
\]
or
\[
\|\bar{B}\| \le \Big( \sum_{i, k = 1}^{n} \bar{b}_{i k}^{\,2} \Big)^{1/2} = M_0 \tag{18}
\]

In the same way one obtains
\[
\|Q\| \le \Big( \sum_{i, k = 1}^{n} q_{i k}^{\,2} \Big)^{1/2} = M_2 \tag{19}
\]

One assumes that there exists a constant ρ such that ‖F(λ, t)‖ < ρ for |λ| < h. Thus relation (14) becomes
\[
\|\tilde{x}\| \le N e^{\nu t} \|x_0\| + N M e^{\nu t} \int_0^t e^{-\nu s}\, \|\tilde{x}(s)\|\, ds + \Big( \frac{\rho}{\nu} - \frac{\rho}{\nu}\, e^{-\nu t} \Big) N e^{\nu t} \tag{20}
\]
where we have written M_0 M_1 M_2 = M.

By multiplying (20) by e^{-ν t}, we have
\[
e^{-\nu t}\, \|\tilde{x}\| \le N \|x_0\| + N M \int_0^t e^{-\nu s}\, \|\tilde{x}(s)\|\, ds + \frac{\rho}{\nu}\, N \big( 1 - e^{-\nu t} \big) \tag{21}
\]

For the evaluation of ‖x̃‖ one uses the following

Lemma 4.1. If the functions ψ(t), g(t) ≥ 0 and k(t) ≥ 0, continuous on the interval [0, t_f], satisfy in this interval the inequality
\[
\psi(t) \le g(t) + \int_0^t k(s)\, \psi(s)\, ds \tag{22}
\]
then in the same interval
\[
\psi(t) \le g(t) + \int_0^t g(s)\, k(s)\, e^{\int_s^t k(\tau)\, d\tau}\, ds \tag{23}
\]

We take
\[
\psi(t) = e^{-\nu t}\, \|\tilde{x}\|, \qquad g(t) = N \|x_0\| - \frac{\rho}{\nu} N e^{-\nu t} + \frac{\rho}{\nu} N, \qquad k = N M \tag{24}
\]
By applying Lemma 4.1 and carrying out the calculations, it results that
\[
\|\tilde{x}\| \le N \|x_0\|\, e^{(\nu + N M) t} + \frac{N \rho}{-(\nu + N M)} \big( 1 - e^{(\nu + N M) t} \big) \tag{25}
\]
We assume the condition ν + N M < 0 to be fulfilled. Thus from (25) one obtains an upper bound for the optimal state x̃. By using the expression (12) of the optimal control we have

\[
\|\tilde{u}\| \le \| {-R^{-1} B^*} \|\, \|E_t^*\|\, \|Q\|\, \|\tilde{x}\| \tag{26}
\]
Taking ‖−R^{-1} B^*‖ ≤ M_3 and taking into account (16), (19) and (25), relation (26) becomes
\[
\|\tilde{u}\| \le M_1 M_2 M_3 \Big( N \|x_0\| + \frac{N \rho}{-(\nu + N M)} \Big) = \bar{M} \tag{27}
\]
with M̄ a real constant.

Consider the set Ω ⊂ U of admissible controls (each satisfying ‖u‖ ≤ M̄); Ω is closed and convex.

Theorem 4.2 (A Bang-Bang theorem). Assume that the set Ω is a convex neighborhood of the origin and that ũ(t) is an optimal control for the problem formulated above; then ũ(t) ∈ ∂Ω for almost all t ∈ [0, t_f].

Rotund spaces. The space U is defined to be rotund if one of the following equivalent conditions is satisfied:

(i) ‖u_1 + u_2‖ = ‖u_1‖ + ‖u_2‖ ⇒ u_2 = α_1 u_1 for some scalar α_1 ≠ 0;

(ii) each convex subset Ω of U has at most one element u satisfying ‖u‖ ≤ ‖x‖ for all x ∈ Ω.

Examples of rotund spaces: Hilbert spaces, the spaces l^p and L^p with 1 < p < ∞, uniformly convex spaces.

If the Banach spaces U_1, U_2, . . . , U_n are rotund, then the product space U_1 × U_2 × . . . × U_n is rotund.

Theorem 4.3. Let the optimal control problem be as stated above, with the set of admissible controls Ω; then the optimal control is unique.

Proof. Assume ũ, ū are two optimal controls. Therefore
\[
T(t_f)\, x_0(\lambda) + \int_0^{t_f} T(t_f - s)\, B\, \tilde{u}(\lambda, s)\, ds + \int_0^{t_f} T(t_f - s)\, F(\lambda, s)\, ds = x(t_f, \lambda) \tag{28}
\]
\[
T(t_f)\, x_0(\lambda) + \int_0^{t_f} T(t_f - s)\, B\, \bar{u}(\lambda, s)\, ds + \int_0^{t_f} T(t_f - s)\, F(\lambda, s)\, ds = x(t_f, \lambda) \tag{29}
\]
By adding equations (28) and (29) and dividing by 2, one obtains
\[
T(t_f)\, x_0(\lambda) + \int_0^{t_f} T(t_f - s)\, B\, \tfrac{1}{2}(\tilde{u} + \bar{u})\, ds + \int_0^{t_f} T(t_f - s)\, F(\lambda, s)\, ds = x(t_f, \lambda) \tag{30}
\]
Then ½(ũ + ū) is also an optimal control.

From the bang-bang theorem above, ũ, ū and ½(ũ + ū) all belong to ∂Ω for almost all t ∈ [0, t_f]. To say that the space is rotund is equivalent to the statement that ũ, ū, ½(ũ + ū) ∈ ∂Ω ⇒ ũ = ū, and the uniqueness of the optimal control follows. □

5. Fundamental matrix of solutions for the variation system

We consider the system
\[
\frac{d y(t)}{d t} = f(y(t)) \tag{31}
\]
where y is the state variable and f is a function defined and continuous on D, satisfying the Lipschitz condition
\[
| f(\bar{y}) - f(y) | < K\, | \bar{y} - y | \tag{32}
\]
with K a constant independent of y on D.

In accordance with a known theorem, if condition (32) is fulfilled, then from y_n → y_0 it results that y(t; t_0, y_n) → y(t; t_0, y_0), where y(t; t_0, y_0) represents the solution of system (31).

Taking (32) into account, one obtains
\[
| y(t; t_0, y_n) - y(t; t_0, y_0) | \le | y_n - y_0 |\, e^{K (t - t_0)} \tag{33}
\]
This implies uniform convergence with respect to t on the interval [t_0, t_f] on which the solutions are defined.

Let f(y(t)) be differentiable in D for t ∈ [t_0, t_f]. For y_0 ∈ D, t ∈ [t_0, t_f], we have
\[
f(y) - f(y_0) = \frac{\partial f}{\partial y}\bigg|_{y = y_0} (y - y_0) + O(|y - y_0|) \tag{34}
\]

Theorem 5.1. If f is differentiable in D for t ∈ [t_0, t_f] and y(t; t_0, y_0) ∈ D for t ∈ [t_0, t_f], then y(t; t_0, y_0) is differentiable with respect to y_0 and ∂y(t; t_0, y_0)/∂y_0 is a fundamental matrix of solutions for the linear system
\[
\frac{d\, \delta y}{d t} = \frac{\partial f(y(t; t_0, y_0))}{\partial y}\, \delta y \tag{35}
\]
which represents the variation system corresponding to the solution y(t; t_0, y_0).

Proof. Let Y(t, t_0) be a fundamental matrix for (35), whose columns are solutions of the variation system, with Y(t_0, t_0) = I (I the identity matrix).

We consider y_0 and y_1 respectively as initial conditions for (31), y_0, y_1 ∈ D, t ∈ [t_0, t_f]. The corresponding solutions of system (31) can be written
\[
y(t; t_0, y_0) = y_0 + \int_{t_0}^{t} f(y(\tau; t_0, y_0))\, d\tau, \qquad
y(t; t_0, y_1) = y_1 + \int_{t_0}^{t} f(y(\tau; t_0, y_1))\, d\tau \tag{36}
\]

Because Y(t, t_0) is a solution of the variation system, we have
\[
Y(t, t_0)\, (y_1 - y_0) = y_1 - y_0 + \int_{t_0}^{t} \frac{\partial f(y(\tau; t_0, y_0))}{\partial y}\, Y(\tau, t_0)\, (y_1 - y_0)\, d\tau \tag{37}
\]

From (36) and (37) one obtains
\[
y(t; t_0, y_1) - y(t; t_0, y_0) - Y(t, t_0)(y_1 - y_0) = \int_{t_0}^{t} \Big\{ f[y(\tau; t_0, y_1)] - f[y(\tau; t_0, y_0)] - \frac{\partial f[y(\tau; t_0, y_0)]}{\partial y}\, Y(\tau, t_0)(y_1 - y_0) \Big\}\, d\tau \tag{38}
\]

Using the differentiability assumption on f(y(t)) in D, expressed by relation (34), we obtain
\[
y(t; t_0, y_1) - y(t; t_0, y_0) - Y(t, t_0)(y_1 - y_0) = \int_{t_0}^{t} \frac{\partial f[y(\tau; t_0, y_0)]}{\partial y} \Big[ y(\tau; t_0, y_1) - y(\tau; t_0, y_0) - Y(\tau, t_0)(y_1 - y_0) \Big]\, d\tau + \int_{t_0}^{t} O\big( \big| y(\tau; t_0, y_1) - y(\tau; t_0, y_0) \big| \big)\, d\tau \tag{39}
\]

From relation (33) it results that
\[
O\big( \big| y(\tau; t_0, y_1) - y(\tau; t_0, y_0) \big| \big) = O\big( | y_1 - y_0 | \big) \tag{40}
\]
for t fixed, so that relation (39) leads to



\[
\big| y(t; t_0, y_1) - y(t; t_0, y_0) - Y(t, t_0)(y_1 - y_0) \big| \le \int_{t_0}^{t} K\, \big| y(\tau; t_0, y_1) - y(\tau; t_0, y_0) - Y(\tau, t_0)(y_1 - y_0) \big|\, d\tau + O\big( | y_1 - y_0 | \big) \tag{41}
\]

From relation (41), using a known inequality (Lemma 4.1), we have
\[
\big| y(t; t_0, y_1) - y(t; t_0, y_0) - Y(t, t_0)(y_1 - y_0) \big| = O\big( | y_1 - y_0 | \big) \tag{42}
\]
Theorem 5.1 has thus been demonstrated. □

By considering t_0 = 0 we obtain
\[
Y(t, 0) = \left[ \frac{\partial y_i(t)}{\partial y_j(0)} \right], \qquad i, j = 1, 2, \dots, n \tag{43}
\]
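
In practice Y(t, 0) can be computed by integrating (35) together with the nominal trajectory, as in the sketch below; the example vector field f, its Jacobian, and the solver tolerances are illustrative choices, not taken from the paper.

```python
# Sketch: fundamental matrix Y(t,0) of the variation system (35), obtained by
# integrating the nominal trajectory y(t;0,y0) together with the matrix ODE
# dY/dt = (df/dy)(y(t)) Y, Y(0) = I.  The example f and its Jacobian are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def f(y):
    """Example nonlinear vector field (not from the paper)."""
    return np.array([y[1], -np.sin(y[0]) - 0.2 * y[1]])

def df_dy(y):
    """Jacobian of f at y."""
    return np.array([[0.0, 1.0],
                     [-np.cos(y[0]), -0.2]])

def rhs(t, z, n):
    y, Y = z[:n], z[n:].reshape(n, n)
    return np.concatenate([f(y), (df_dy(y) @ Y).ravel()])

def fundamental_matrix(y0, t, n=2):
    z0 = np.concatenate([y0, np.eye(n).ravel()])
    sol = solve_ivp(rhs, (0.0, t), z0, args=(n,), rtol=1e-9, atol=1e-9)
    return sol.y[n:, -1].reshape(n, n)     # Y(t, 0) ≈ [∂y_i(t)/∂y_j(0)]
```

The columns of the returned matrix can be checked against finite differences (y(t; 0, y_0 + ε e_j) − y(t; 0, y_0))/ε, which is precisely the content of Theorem 5.1 and of relation (43).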

6. Solving method for the two-point boundary value problem

The proposed optimum problem is equivalent to the determination of the control vector ũ ∈ U which minimizes the cost functional (3) subject to the constraint (1). The Hamiltonian H can be written

\[
H(x, p, u, \lambda, t) = \frac{1}{2} x^*(t, \lambda)\, Q\, x(t, \lambda) + \frac{1}{2} u^*(t, \lambda)\, R\, u(t, \lambda) + p^*(t, \lambda) \big[ A\, x(t, \lambda) + B\, u(t, \lambda) + F(t, \lambda) \big] \tag{44}
\]
and the optimal control ũ ∈ L^2(0, t_f; U) is obtained from
\[
H_u(x, p, u, \lambda, t) = 0 \tag{45}
\]

It results that
\[
\tilde{u} = -R^{-1} B^* p
\]

The Hamiltonian H̃ corresponding to the optimal control ũ is given by
\[
\tilde{H} = \frac{1}{2} x^* Q x + p^* A x - \frac{1}{2} p^* B R^{-1} B^* p + p^* F \tag{46}
\]

The determination of the optimal solution results from the integration of the system
\[
\dot{x} = \frac{\partial \tilde{H}}{\partial p}, \qquad \dot{p} = -\frac{\partial \tilde{H}}{\partial x} \tag{47}
\]
From (47) we have
\[
\dot{x} = A x - B R^{-1} B^* p + F, \qquad \dot{p} = -Q x - A^* p \tag{48}
\]
with the boundary conditions
\[
x(0, \lambda) = x_0(\lambda) \tag{49}
\]
\[
p(t_f, \lambda) = G(t_f)\, x(t_f, \lambda) \tag{50}
\]

A fundamental 2n × n matrix of solutions Y(t) is obtained for the differential equations (48) with F = 0. A particular solution y(t) is also obtained for the inhomogeneous equation (48). Thus (see Theorem 5.1) Y(t) is given by
\[
\dot{Y} = \begin{pmatrix} A & -B R^{-1} B^* \\ -Q & -A^* \end{pmatrix} Y(t) \tag{51}
\]
Because x_0 = const. for a given λ = λ_0 = const. and p_0 is unspecified, it results that δx_0 = 0 (admissible trajectories) and δp_0 = I, so that
\[
Y(0) = Y_0 = \begin{pmatrix} 0 \\ I \end{pmatrix} \tag{52}
\]
where each block is n × n.

The solution y(t, λ) is obtained from
\[
\dot{y} = \begin{pmatrix} A & -B R^{-1} B^* \\ -Q & -A^* \end{pmatrix} y(t) + \begin{pmatrix} F(t, \lambda) \\ 0 \end{pmatrix} \tag{53}
\]
\[
y(0, \lambda) = \begin{pmatrix} x_0(\lambda) \\ 0 \end{pmatrix} \tag{54}
\]


We obtain
\[
y(t, \lambda) = Y(t) \begin{pmatrix} x_0(\lambda) \\ 0 \end{pmatrix} + Y(t) \int_0^t Y^{-1}(s) \begin{pmatrix} F(s, \lambda) \\ 0 \end{pmatrix} ds \tag{55}
\]


A general solution x and p of equation (48) that satisfies the initial boundary condition (49) can be written as
\[
\begin{pmatrix} x(t, \lambda) \\ p(t, \lambda) \end{pmatrix} = Y(t)\, p_0 + y(t, \lambda) \tag{56}
\]
where p_0 is a set of n constants that corresponds to a certain choice of initial conditions for p(t, λ).

They are determined at the final time from the terminal boundary condition (50). Relation (50) can be written as
\[
p^*(t_f, \lambda) - x^*(t_f, \lambda)\, G(t_f) = \begin{pmatrix} x^*(t_f, \lambda) & p^*(t_f, \lambda) \end{pmatrix} \begin{pmatrix} -G(t_f) \\ I \end{pmatrix} = 0 \tag{57}
\]
so that, by (56), the boundary condition (50) becomes
\[
\begin{pmatrix} -G(t_f) & I \end{pmatrix} \big[ Y(t_f)\, p_0 + y(t_f) \big] = 0 \tag{58}
\]
The solution of equation (58) determines the missing initial condition p_0. A solution of the two-point boundary value problem may then be obtained by reintegrating the differential equations forward with the initial conditions x_0, p_0.
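
A compact sketch of this procedure for the constant-matrix case is given below: Y(t) is integrated from (51)-(52), y(t) from (53)-(54), and p_0 is then obtained from (58). The example data, the sign convention of the Hamiltonian block matrix (taken as in (48) above), and the solver tolerances are illustrative assumptions of mine.

```python
# Sketch of Section 6 for constant matrices: build the Hamiltonian block matrix
# of (48)/(51), integrate Y(t) from (51)-(52) and y(t) from (53)-(54), then
# solve (58) for the missing initial costate p0.  Example data are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

n, m = 2, 1
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(n); R = np.array([[1.0]]); G_tf = 0.5 * np.eye(n)
x0 = np.array([1.0, 0.0]); lam = 0.3; t_f = 5.0

H = np.block([[A, -B @ np.linalg.inv(R) @ B.T],
              [-Q, -A.T]])                        # matrix of (51)/(53)

def F(lam, t):                                    # forcing term of (1), illustrative
    return lam * np.array([np.sin(t), 0.0])

def rhs(t, z):
    # z stacks the 2n x n matrix Y (columns) and the 2n vector y
    Y = z[:2*n*n].reshape(2*n, n)
    y = z[2*n*n:]
    dy = H @ y + np.concatenate([F(lam, t), np.zeros(n)])
    return np.concatenate([(H @ Y).ravel(), dy])

Y0 = np.vstack([np.zeros((n, n)), np.eye(n)])     # (52)
y0 = np.concatenate([x0, np.zeros(n)])            # (54)
sol = solve_ivp(rhs, (0.0, t_f), np.concatenate([Y0.ravel(), y0]),
                rtol=1e-9, atol=1e-9)
Y_tf = sol.y[:2*n*n, -1].reshape(2*n, n)
y_tf = sol.y[2*n*n:, -1]

C = np.hstack([-G_tf, np.eye(n)])                 # row block of (58)
p0 = np.linalg.solve(C @ Y_tf, -C @ y_tf)         # (58): C (Y p0 + y) = 0
# Reintegrating (48) forward from (x0, p0) then yields x(t), p(t) and u = -R^{-1} B* p.
```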

7. Field of extremals

If the right-hand side of the differential system (48) admits continuous partial derivatives not only with respect to the variables x and p, but also with respect to the parameter λ for λ ∈ [λ_0, λ_f], then its solution admits continuous partial derivatives with respect to λ, treated as an initial value.

This case reduces to that of a system without a parameter which admits continuous partial derivatives with respect to the initial values. Thus one completes the system (48), of order 2n, with the (2n + 1)-th equation dλ/dt = 0 and the initial value λ = λ_0 for t = t_0 = 0. Then the solution of system (48) is differentiable with respect to the initial values, in particular with respect to λ_0, for any λ ∈ [λ_0, λ_f]. Since dλ/dλ_0 = dλ_0/dλ_0 = 1, the system in variations corresponding to (35) is
\[
\frac{d\, \delta y_i}{d t} = \sum_{j=1}^{2} \frac{\partial f_i}{\partial y_j}\, \delta y_j + \frac{\partial f_i}{\partial \lambda}, \qquad i = 1, 2 \tag{59}
\]
where
\[
y = (y_1, y_2)^* = (x, p)^* = (x_1, \dots, x_n, p_1, \dots, p_n)^*, \qquad \frac{\partial y_i}{\partial \lambda} = \delta y_i \tag{60}
\]
In Section 6 one presented a method of determining the initial value p_0 of the adjoint variable for the two-point boundary value problem. Let y(t, λ) be a particular solution of system (48) obtained for the initial values t_0 = 0, x(0, λ) = x_0(λ), p(0, λ) = p_0(λ), λ = λ_0, for which we write
\[
y_1 = \varphi_1(t, \lambda), \qquad y_2 = \varphi_2(t, \lambda) \tag{61}
\]

The variations are defined by the values of the derivatives with respect to the parameter λ, so that we get
\[
\delta y_i = \frac{\partial \varphi_i(t, \lambda)}{\partial \lambda} = \left( \frac{\partial y_i}{\partial \lambda} \right)_{t_0 = 0,\; x_0(\lambda),\; p_0(\lambda)}, \qquad i = 1, 2 \tag{62}
\]

At the same time, the variations δy_i satisfy the system (59), which one writes as
\[
\frac{d\, \delta y_i}{d t} = \sum_{j=1}^{2} \frac{\partial f_i[\varphi_1(t, \lambda), \varphi_2(t, \lambda)]}{\partial y_j}\, \delta y_j + \frac{\partial f_i[\varphi_1(t, \lambda), \varphi_2(t, \lambda)]}{\partial \lambda}, \qquad i = 1, 2 \tag{63}
\]
The initial conditions become
\[
(\delta y_1)_0 = \frac{\partial x_0(\lambda)}{\partial \lambda} = x_0'(\lambda), \qquad (\delta y_2)_0 = \frac{\partial p_0(\lambda)}{\partial \lambda} = p_0'(\lambda) \tag{64}
\]

By integrating system (63) with the initial conditions (64) one obtains
\[
\delta y = Y(t) \begin{pmatrix} x_0'(\lambda) \\ p_0'(\lambda) \end{pmatrix} + Y(t) \int_0^t Y^{-1}(s) \begin{pmatrix} \partial F(s, \lambda) / \partial \lambda \\ 0 \end{pmatrix} ds \tag{65}
\]
Thus, if one knows a particular solution (61), one can determine the variations δy_i.

Proposition 7.1. If λ_0 is the value of the parameter for which we have the solution (61), then the solution corresponding to the value λ_0 + dλ of the parameter is obtained by Taylor series expansion, retaining only the first-order terms in dλ:
\[
y_i(t, \lambda_0 + d\lambda) = y_i(t, \lambda_0) + \left( \frac{\partial y_i}{\partial \lambda} \right)_{\lambda = \lambda_0} d\lambda = y_i(t, \lambda_0) + \delta y_i(t, \lambda_0)\, d\lambda, \qquad i = 1, 2 \tag{66}
\]

Hence, the difference between the solution (61) and a neighboring solution is expressed as the product of the variations δy_i and the differential of the parameter λ. The results of this study can be extended without difficulty to an arbitrary number of parameters (λ, µ, . . .).
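
The sketch below illustrates this construction for the linear system (48), where the variation system (63) reduces to d(δy)/dt = H δy + (∂F/∂λ, 0)^*, integrated with the initial data (64); a neighboring extremal is then formed by (66). The parameter dependence of x_0, p_0 and F is invented for the example.

```python
# Sketch of Section 7: integrate the variation system (63) with initial data (64)
# alongside the nominal solution of (48), then form the neighboring extremal (66).
# The parameter dependence below (x0, p0, F as functions of lam) is illustrative.
import numpy as np
from scipy.integrate import solve_ivp

n = 2
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(n); R = np.array([[1.0]])
H = np.block([[A, -B @ np.linalg.inv(R) @ B.T], [-Q, -A.T]])

F       = lambda lam, t: lam * np.array([np.sin(t), 0.0])
dF_dlam = lambda lam, t: np.array([np.sin(t), 0.0])
x0      = lambda lam: np.array([1.0 + lam, 0.0])             # illustrative
dx0     = lambda lam: np.array([1.0, 0.0])
p0      = lambda lam: np.array([0.2, -0.1])                  # e.g. from Section 6
dp0     = lambda lam: np.zeros(n)

def rhs(t, z, lam):
    y, dy = z[:2*n], z[2*n:]                  # nominal solution and variation
    f  = H @ y  + np.concatenate([F(lam, t), np.zeros(n)])        # system (48)
    df = H @ dy + np.concatenate([dF_dlam(lam, t), np.zeros(n)])  # system (63)
    return np.concatenate([f, df])

lam0, dlam, t_f = 0.3, 0.05, 5.0
z0 = np.concatenate([x0(lam0), p0(lam0), dx0(lam0), dp0(lam0)])   # (54) and (64)
sol = solve_ivp(rhs, (0.0, t_f), z0, args=(lam0,), rtol=1e-9, atol=1e-9)
y_nom, dy = sol.y[:2*n, -1], sol.y[2*n:, -1]
y_neighbor = y_nom + dy * dlam                                    # formula (66)
```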

8. Numerical method for approximate solution

Let the state space be X = R^n, the control space U a Hilbert space, and S : U → X a linear transformation. The mathematical model can be postulated as x = S u, where S is a linear operator. We consider {e_i} an orthonormal basis for X and {d_i} an orthonormal basis for U. Define

\[
u_j = \langle u, d_j \rangle \quad \text{so that} \quad u = \sum_{j=1}^{n} u_j\, d_j \tag{67}
\]
and define similarly
\[
x_i = \langle x, e_i \rangle \tag{68}
\]

Then
\[
x(\lambda) = S \Big( \sum_{j=1}^{n} u_j\, d_j \Big) = \sum_{j=1}^{n} u_j\, S d_j \tag{69}
\]
Taking inner products with e_i one obtains
\[
\langle x, e_i \rangle = \Big\langle \sum_{j=1}^{n} u_j\, S d_j,\; e_i \Big\rangle \tag{70}
\]
which can be written
\[
x_i = \sum_{j=1}^{n} u_j\, \langle S d_j, e_i \rangle \tag{71}
\]

or
\[
x = \sum_{i=1}^{n} x_i\, e_i = \sum_{i=1}^{n} \Big( \sum_{j=1}^{n} u_j\, \langle S d_j, e_i \rangle \Big) e_i \tag{72}
\]

Denoting the scalars
\[
\langle S d_j, e_i \rangle = \alpha_{j i} \tag{73}
\]
the mathematical model is defined by
\[
x = \sum_{i=1}^{n} \Big( \sum_{j=1}^{n} u_j\, \alpha_{j i} \Big) e_i \tag{74}
\]

In the optimal control problem considered, we have
\[
x(t_f, \lambda) = \Phi(t_f, 0)\, x_0(\lambda) + \int_0^{t_f} \Phi(t_f, \tau)\, B\, u(\lambda, \tau)\, d\tau + \int_0^{t_f} \Phi(t_f, \tau)\, F(\lambda, \tau)\, d\tau
\]
where the matrix Φ(t_f, τ) = T(t_f − τ), so that the control problem is to ensure that the following equation is satisfied:
\[
\bar{x}(t_f, \lambda) = x(t_f, \lambda) - \Phi(t_f, 0)\, x_0(\lambda) - \int_0^{t_f} \Phi(t_f, \tau)\, F(\lambda, \tau)\, d\tau = \int_0^{t_f} \Phi(t_f, \tau)\, B\, u(\lambda, \tau)\, d\tau = S u \tag{75}
\]

with the control u that satisfies this equation and minimizes the given cost functional (3).

Each element (S u)_i can be written
\[
(S u)_i = \sum_{k=1}^{n} \int_0^{t_f} \big[ \Phi(t_f, \tau)\, B \big]_{i k}\, u_k(\tau, \lambda)\, d\tau \tag{76}
\]

Let {e_i} be the usual coordinate basis for X; then
\[
\bar{x} = S u = \sum_{i=1}^{n} e_i\, (S u)_i = \sum_{i=1}^{n} e_i \sum_{k=1}^{n} \int_0^{t_f} \big[ \Phi(t_f, \tau)\, B \big]_{i k}\, u_k(\tau, \lambda)\, d\tau \tag{77}
\]
The rows of Φ(t_f, τ) B, i = 1, . . . , n, are the functionals that form a basis {d_i} for the space U.
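
In the finite-dimensional case, taking the rows of Φ(t_f, τ)B as the generating functions d_j (as in the last sentence above) makes α_{ji} = ⟨S d_j, e_i⟩ the finite-time controllability Gramian, and solving (74) for the coefficients u_j then gives the control of minimum L^2 norm realizing a prescribed x̄ of (75). The sketch below is an illustration under that reading; the data and the quadrature grid are my own choices.

```python
# Sketch of Section 8 in finite dimensions: take d_j(tau) to be the j-th row of
# Phi(t_f, tau) B, so that alpha_{j i} = <S d_j, e_i> is the finite-time
# controllability Gramian.  Solving (74) for the coefficients u_j then gives the
# minimum-norm control reproducing a prescribed x_bar.  Data are illustrative.
import numpy as np
from scipy.linalg import expm

n, m = 2, 1
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
t_f, num = 5.0, 2000
taus = np.linspace(0.0, t_f, num)
dtau = taus[1] - taus[0]

PhiB = np.array([expm((t_f - tau) * A) @ B for tau in taus])   # shape (num, n, m)

# alpha[j, i] = integral of <row_j, row_i> dtau  (controllability Gramian)
alpha = np.einsum('tjk,tik->ji', PhiB, PhiB) * dtau

x_bar = np.array([1.0, -0.5])               # prescribed reachable target in (75)
c = np.linalg.solve(alpha, x_bar)           # coefficients u_j of (74)

def u(tau):
    """Minimum-norm control u(tau) = sum_j c_j d_j(tau)."""
    return (expm((t_f - tau) * A) @ B).T @ c

# Check: the integral of Phi(t_f, tau) B u(tau) dtau should reproduce x_bar.
x_check = sum(PhiB[k] @ u(taus[k]) for k in range(num)) * dtau
```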

9. Conclusions

The results of this study refer to the minimization of the Bolza quadratic functional subject to restrictions represented by controlled differential systems with a parameter. The feedback control determined is expressed through a bounded linear operator. One demonstrates that the control space is rotund, which implies the uniqueness of the optimal control. Through the mathematical methods used in this study, the Banach-Steinhaus theorem and the open mapping theorem are verified, referring to the boundedness of the operator norm and to the existence of the optimal control respectively. The determination of the initial value of the adjoint variable through an iterative procedure allows the integration of the system from which one obtains the solution of the two-point boundary value problem. The construction of the field of extremals in the neighborhood of the optimal trajectory is achieved by solving the associated variation system. The class of neighboring extremal trajectories represents the relative extremum in the optimum problem formulated. The absolute extremum results from verifying the sufficient optimality conditions, defined by the fact that there are no points conjugate to the extremities of the trajectory analyzed. This issue was treated by the author in [10], [15], [16], [17]. Finally, a numerical method is presented for determining an approximate solution to the optimum control problem, in which the elements of the state and control spaces respectively are expressed in orthonormal bases.

References

[1] Bellman, R.: Dynamic Programming, Princeton University Press (1977).

[2] Bourdache-Siguerdidjane, H.: On applications of a new method for computing optimal nonlinear feedback controls, Optimal Control Applications & Methods, Vol. 8, pp. 397-409 (1987).

[3] Bourdache-Siguerdidjane, H. and Fliess, M.: Optimal feedback control of nonlinear systems, Automatica, Vol. 23, No. 3, pp. 365-372 (1987).

[4] Da Prato, G.: Linear Control Theory for Infinite Dimensional Systems, A. Agrachev Ed., ICTP Lecture Notes, pp. 63-105 (2009).
[5] Hermes, H.: Nilpotent and high order approximations of vector field systems, SIAM Review, Vol. 33, pp. 238-264 (1991).

[6] Hull, D. G.: Optimal Control Theory for Applications, Springer-Verlag, New York (2003).

[7] Nijmeijer, H.: Controlled invariance for affine control systems, Int. J. Control, Vol. 34, pp. 824-833 (1981).

[8] S, S.M. and R, P.: Optimal nonlinear feedback regulation of spacecraft angular momentum, Optimal Control Applications & Methods, Vol. 5, pp. 101-110 (1984).

[9] Zabczyk, J.: Mathematical Control Theory: An Introduction, Birkhauser (1996).

[10] Popescu, M.: Singular normal extremals and conjugate points for Bolza functionals, Journal of Optimization Theory and Applications, Vol. 115, No. 2, pp. 267-282 (2002).

[11] Popescu, M.: Optimal Singular Control for Controlled Dynamical Systems, Academic Ed., Bucharest (2002).

[12] Popescu, M.: On minimum quadratic functional control of affine nonlinear systems, Nonlinear Analysis, Vol. 36, pp. 1165-1173 (2004).

[13] Popescu, M.: Control of affine nonlinear systems with nilpotent structure in singular problems, Journal of Optimization Theory and Applications, Vol. 124, No. 2, pp. 455-466 (2005).

[14] Popescu, M. and Pelletier, F.: Courbes optimales pour une distribution affine, Bull. Sci. Math., Vol. 129, pp. 701-725 (2005).

[15] Popescu, M.: Sweep method in analysis optimal control for rendez-vous problems, Journal of Applied Mathematics & Computing, Vol. 23, No. 1-2, pp. 243-256 (2007).

[16] Popescu, M.: Variational and Transitory Processes, Nonlinear Analysis in Optimal Control, Technical Ed., Bucharest (2007).

[17] Popescu, M.: Optimal control for quasi-linear systems with small parameter, Journal of Applied Mathematics & Computing, Vol. 27, No. 1-2, pp. 393-409 (2008).

[18] Popescu, M.: Advances in Mathematical Problems in Engineering Aerospace and Sciences, Cambridge Scientific Publishers, England, Vol. 2 (2008), edited by S. Sivasundaram.

[19] Popescu, M.: Fundamental solution for linear two-point boundary value problem, Journal of Applied Mathematics & Computing, Vol. 23, No. 5-6, pp. 385-394 (2009).

[20] Popescu, M.: Stability and Stabilization of Dynamical Systems, Technical Ed., Bucharest (2009).
