Student Thesis
Junsheng Su
September 22, 2016
Examiner:
Supervisor:
Contents

1 Introduction
  1.1 Overview
  1.2 Model Predictive Control
  1.3 Piecewise Linear Approximation
  1.4 Convex Polytopes
5 Parameters
6 Algorithm
  6.1 Runco
  6.2 Space
7 Simulation Studies
  7.1 Problem With 1-dimensional State And 1-dimensional Input And Quadratic Cost Function
  7.2 Problem With 2-dimensional State And 1-dimensional Input
  7.3 Higher Dimensional Problem
  7.4 A Practical Model For Temperature Control
List of Figures

1.1 CPLF
3.1 Upper Envelope Approximation
3.3 Comparison between UE Approximation and the original function in hyperplane x(2) = 5
7.1 Temperature control with piecewise affine cost function and with different quadratic cost functions

List of Tables

5.1
7.1
7.2
7.3 Comparison between two methods, 2D state, 1D input, non-quadratic cost function
7.4 Comparison between two methods, 3D state, 2D input, non-quadratic cost function
1 Introduction
1.1 Overview
Optimal control theory was developed by Pontryagin [23] and Bellman [1] to optimize a process with respect to a cost function. In particular, for problems with linear dynamics and a quadratic cost function, the optimal control takes the simple form of a linear state feedback, derived by solving the Algebraic Riccati Equation (ARE) [15]. However, optimal control has difficulties with non-linear systems and with systems subject to constraints on states and/or inputs: in most cases it is impossible to obtain an analytic solution of the optimal control problem for such systems [16], while the inputs and states of most practical systems are bounded. Model Predictive Control is therefore designed to find a suboptimal solution for such systems by computing the optimal solution over a finite horizon.
Model Predictive Control (MPC), also known as Receding Horizon Control (RHC), is a control method that reformulates a control problem with an infinite or long time horizon into one with a finite or shorter time horizon, and applies the optimal solution of the reformulated problem as the current input. In a regular MPC controller, the cost functions are positive definite or even quadratic. An MPC controller with a cost function that is not positive definite is called Economic Model Predictive Control (EMPC). Robust Model Predictive Control (RMPC) takes model uncertainties into account and focuses on ensuring the robustness of the system. Distributed Model Predictive Control has advantages in dealing with large-scale applications; it is solved with distributed or decentralized control schemes, where local control inputs are computed using local measurements and reduced-order models of the local dynamics [8].
There has long been a trade-off in MPC between computational complexity and performance. In recent years, with the rapid growth of computational capacity, MPC has become applicable to many control problems and is arguably the most widely accepted modern control method. However, there are still many control problems where the performance
(Source: http://www.intechopen.com/books/advances-in-discrete-time-systems/discrete-time-model-predictive-control)
    minimize_ū  J(x̄, ū) = Σ_{k=0}^{N−1} ℓ(x̄(k), ū(k)) + F(x̄(N))    (1.2)
    s.t.  x̄(k+1) = f(x̄(k), ū(k))
          x̄(k) ∈ X;  ū(k) ∈ U
          x̄(N) ∈ X_f ⊆ X
          x̄(0) = x_0

x and u are the real states and the actually applied inputs; x̄ and ū are the predicted states and strategies, and x_0 is the initial state at the current step. ℓ is called the running cost and F is called the terminal cost. Recursive feasibility and asymptotic stability of the MPC problem should be guaranteed. Recursive feasibility indicates that, if there is a solution for its first step,
Definition 1.2. [4] A function f : X → ℝ, X ⊆ ℝ^n, is convex if its domain X is convex and for any λ ∈ ℝ with 0 ≤ λ ≤ 1 and any x, y ∈ X,

    f(λx + (1−λ)y) ≤ λ f(x) + (1−λ) f(y).
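This inequality can be checked numerically on samples; the following Python sketch is purely illustrative (the function `quad` and the sample grid are not part of the thesis):

```python
import numpy as np

def is_convex_on_samples(f, xs, ys, thetas, tol=1e-9):
    """Test the convexity inequality of Definition 1.2 on sampled
    point pairs (x, y) and weights theta in [0, 1]."""
    for x in xs:
        for y in ys:
            for th in thetas:
                lhs = f(th * x + (1 - th) * y)
                rhs = th * f(x) + (1 - th) * f(y)
                if lhs > rhs + tol:
                    return False
    return True

quad = lambda x: float(np.dot(x, x))          # f(x) = ||x||^2, convex
pts = [np.array([p, q]) for p in (-2.0, 0.0, 2.0) for q in (-1.0, 1.0)]
weights = np.linspace(0.0, 1.0, 11)
```

Such a sampled test can only refute convexity, never prove it; it is a sanity check, not a certificate.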
There has been much research on fitting problems. The purpose of a fitting problem is to describe (fit) a point set X : (x, y), x ∈ ℝ^n, y ∈ ℝ, with a given family of functions f ∈ F : ℝ^n → ℝ as precisely as possible. Generally speaking, there are two types of fitting: interpolation and smoothing. Interpolation requires y = f(x) for all (x, y) ∈ X; smoothing is more flexible and keeps a higher degree of freedom.

The literature on piecewise affine approximation is vast. In 1969 Bellman and Roth [2] fitted curves by segmented straight lines. E. Sontag [27] began to approximate non-linear systems with piecewise linear systems. Misener and Floudas [22] provided a literature review of this field up to 2010 and introduced a method of piecewise linear approximation that applies to 2- and 3-dimensional non-linear functions. Azuma et al. [14] developed a method called Lebesgue Piecewise Affine Approximation of Non-linear Systems. Lai et al. [18] proposed a procedure for obtaining piecewise affine autoregressive exogenous (PWARX) models of non-linear systems. However, many of these methods only apply to 1- or 2-dimensional situations; for example, the piecewise affine interpolation mentioned in [10] is only valid for 2-dimensional functions. Most approximation methods solve fitting problems that depend on the chosen points. This introduces a bias and makes the quality of the approximation strongly dependent on the points we choose. Given a cost function f : ℝ^n → ℝ and an ordered set of points X = (x_1, x_2, ..., x_m) ∈ X × ... × X, X ⊆ ℝ^n, define the Data Point Set as the ordered set of pairs (X, f(X)) := {(x_1, y_1), (x_2, y_2), ..., (x_m, y_m)}. The piecewise affine approximation problem can then be described as

    minimize_{L_i, l_i}  ‖F − f‖    (1.3)
    s.t.  F ∈ F_p
    F_p = {h : ℝ^n → ℝ | ∃ L_i ∈ ℝ^{1×n}, l_i ∈ ℝ, i ∈ ℕ_[1,p] : h(x) = max_i (L_i x + l_i)}
For a vector v = (v_1, ..., v_n)^⊤, define

    max(v) = max_{i ∈ ℕ_[1,n]} v_i.

Inside the terminal region the local control law gives the closed-loop dynamics

    x(k+1) = (A + BK)x(k) =: A_K x(k)  if x(k) ∈ X_f.
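The max() convention and the piecewise affine class F_p can be illustrated with a small Python sketch; the coefficient rows (L_i, l_i) below are hypothetical, chosen only to make the pieces visible:

```python
import numpy as np

# Hypothetical coefficients (L_i, l_i) of a convex piecewise affine
# function h(x) = max_i (L_i x + l_i); three pieces in R^2.
L = np.array([[ 1.0,  0.0],
              [-1.0,  0.0],
              [ 0.0,  1.0]])
l = np.array([0.0, 0.0, -0.5])

def pwa(x):
    """Evaluate h(x) = max(L x + l), the max() convention of the text."""
    return float(np.max(L @ x + l))
```

For example, at x = (2, 0) the first piece is active and h(x) = 2; at x = (0, 3) the third piece is active and h(x) = 2.5.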
    minimize_ū  J(x̄, ū) = Σ_{k=0}^{N−1} L(x̄(k), ū(k)) + F(x̄(N))    (2.2)
    s.t.  x̄(k+1) = A x̄(k) + B ū(k)
          x̄(k) ∈ X;  ū(k) ∈ U
          x̄(N) ∈ X_f ⊆ X
          x̄(0) = x_0

The terminal cost function F : ℝ^n → ℝ, F(x) = max(V_k x + v_k), is calculated with the method introduced in [6]. In this thesis the main focus is on the piecewise affine approximation of the original cost function ℓ.
For x ∈ convh(X),

    max(L[x^⊤, 1]^⊤) = u(x | (X, f(X))).

Define U_m(x) = max(L[x^⊤, 1]^⊤) for x ∈ ℝ^n.

Definition 3.1. Define an n-dimensional state space as follows. Let x_{i,min}, x_{i,max} ∈ ℝ for i ∈ ℕ_[1,n], where x_{i,min} < x_{i,max} and n_i ≥ 2, n_i ∈ ℕ; let

    x_max = [x_{1,max}, x_{2,max}, ..., x_{n,max}]^⊤,
    x_min = [x_{1,min}, x_{2,min}, ..., x_{n,min}]^⊤,  n = [n_1, n_2, ..., n_n]^⊤.
    G(i_1, i_2, ..., i_n, :) = (x_{1,min} + (i_1 − 1)x_{1,d}, x_{2,min} + (i_2 − 1)x_{2,d}, ..., x_{n,min} + (i_n − 1)x_{n,d}),
    i_1 ∈ ℕ_[1,n_1], ..., i_n ∈ ℕ_[1,n_n],

and

    x_{i,d} = (x_{i,max} − x_{i,min}) / (n_i − 1).

A grid is handled as a set of n-dimensional points; G_{i_1,i_2,...,i_n} = G(i_1, i_2, ..., i_n, :) is its element.

The Upper Envelope of the grid is denoted by u_G(·) = u(·|(G, f(G))). To calculate L in Matlab, the following algorithm is designed: for any x ∈ G with x_i ≠ x_{i,max}, where x_i is the i-th coordinate of x, let x_i^+ = (x_1, x_2, ..., x_i + x_{i,d}, ..., x_n); obviously, (x_i^+, u_G(x_i^+)) ∈ (G, u_G(G)). The simplex S defined by the n+1 vertices {(x, u_G(x)), (x_1^+, u_G(x_1^+)), ..., (x_n^+, u_G(x_n^+))} represents the Upper Envelope and is of dimension n. Then it holds that u_G(x) = Ax + b for all x ∈ S, where A and b are uniquely determined.

Proof. Denote U_G = (u_G(x), u_G(x_1^+), ..., u_G(x_n^+)) and X = (x, x_1^+, ..., x_n^+). Let the possible equality constraint for the simplex be y = Ax + b, where (x, y) are points in the simplex, x ∈ ℝ^n, y ∈ ℝ, A ∈ ℝ^{1×n}, b ∈ ℝ. All vertices of the simplex must fulfill this equality; with B = b·1_{1×(n+1)} there is

    U_G = AX + B = (A, b)(X^⊤, 1_{(n+1)×1})^⊤.

Define X̃ := (X^⊤, 1_{(n+1)×1})^⊤ and X̃_2(:, i) := X̃(:, i) − X̃(:, 1); it is easy to see that

    X̃_2 = [ x_{1,d}            ;
                     ⋱          ;
                       x_{n,d}  ;
                             1  ]

is of full rank, hence so is X̃, and (A, b) = U_G X̃^{−1} is the only solution, that is, A and b are uniquely determined.
Proof completed.
So the fitting problem to be solved can be written in the form of a least-squares problem:

    minimize  J = Σ_{i ∈ ℕ_[1,m]} ( max_{j ∈ ℕ_[1,p]} (a_j^⊤ x_i + b_j) − f_i )²
Step 1. Partition the data points into p groups P_1^(0), ..., P_p^(0); every index belongs to exactly one group, i.e. if i ∈ ⋃_{k∈ℕ_[1,j−1]} P_k^(0), then i ∉ P_j^(0).

Step 2. For each partition P_j^(l), solve the least-squares problem minimize Σ_{i∈P_j^(l)} (a^⊤ x_i + b − f_i)²:

    [a_j^l; b_j^l] = [ Σ_{i∈P_j^(l)} x_i x_i^⊤ , Σ_{i∈P_j^(l)} x_i ;
                       Σ_{i∈P_j^(l)} x_i^⊤   , |P_j^(l)|        ]^{−1} [ Σ_{i∈P_j^(l)} f_i x_i ; Σ_{i∈P_j^(l)} f_i ]    (3.3)

If the matrix is singular, a regularization term κ(‖a‖₂² + b²) can be added to the problem. In this step p pairs (a_j, b_j) are obtained.

Step 3. For the next iteration a new partition is defined: i ∈ P_j^(l+1) if a_j^⊤ x_i + b_j = max_{k∈ℕ_[1,p]} (a_k^⊤ x_i + b_k); ties are broken so that every index again belongs to exactly one group, i.e. if i ∈ ⋃_{k∈ℕ_[1,j−1]} P_k^(l+1), then i ∉ P_j^(l+1).
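The alternation between Step 2 and Step 3 follows the convex piecewise-linear fitting heuristic of Magnani and Boyd [20]. A minimal Python sketch, with an illustrative 1-dimensional data set and a hypothetical piece count p = 4 (random initialization, as in Step 1):

```python
import numpy as np

rng = np.random.default_rng(0)

# Data from a convex function f(x) = x^2 on [-2, 2] (1-D for brevity).
X = np.linspace(-2.0, 2.0, 41).reshape(-1, 1)
f = X[:, 0] ** 2

p = 4                                   # number of affine pieces
a = rng.normal(size=(p, 1))             # slopes (random initialization)
b = rng.normal(size=p)                  # offsets

for _ in range(20):
    # Step 3 analogue: assign each point to the piece attaining the max.
    vals = X @ a.T + b                  # shape (m, p)
    part = np.argmax(vals, axis=1)
    # Step 2: per-partition least squares for (a_j, b_j), cf. (3.3).
    for j in range(p):
        idx = part == j
        if idx.sum() < 2:
            continue                    # degenerate partition: keep old piece
        A = np.hstack([X[idx], np.ones((idx.sum(), 1))])
        sol, *_ = np.linalg.lstsq(A, f[idx], rcond=None)
        a[j, 0], b[j] = sol[0], sol[1]

approx = np.max(X @ a.T + b, axis=1)
rms = np.sqrt(np.mean((approx - f) ** 2))
```

The heuristic is a local method: the final residual depends on the random initialization, which is why the thesis repeats the process I_1 times.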
Figure 3.3: Comparison between UE Approximation and the original function in hyperplane x (2) = 5
As an example consider the approximation of the function
    minimize_{c_p, c_N, u_p}  Σ_{p=0}^{N−1} c_p + c_N    (3.4a)
    s.t.  L_i [x_p; u_p] + l_i − c_p ≤ 0    (3.4b)
          V_k x_N + v_k − c_N ≤ 0    (3.4c)
          i ∈ ℕ_[1,r_L];  k ∈ ℕ_[1,r_V]
          x_p ∈ X,  u_p ∈ U,  x_N ∈ X_f,  x_0 = x̄_0    (3.4d)
          x_{p+1} = A x_p + B u_p,  p ∈ ℕ_[0,N−1]    (3.4e)

N is the horizon of the MPC controller and c_0, ..., c_N ∈ ℝ. It can be seen that the dimension of the parameter vector is (m+1)N + 1, while the number of inequalities for (1.2) would be decided by the original problem and the horizon, and there would be p_r N + p_n N inequalities for (3.4b) and (3.4c). Therefore it is necessary to limit the number of affine functions of the piecewise affine functions; otherwise the computational complexity is probably too large. This method is also negatively affected by the "curse of dimensionality" because of the discretization of the state space.
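The structure of the epigraph constraints (3.4b)-(3.4c) can be sketched as follows. The dimensions are illustrative only, and only the -c_p / -c_N columns of the constraint matrix are filled in; the input columns follow from the prediction matrices assembled in the next section:

```python
import numpy as np

# Hypothetical instance: m inputs, horizon N, r_L running-cost rows,
# r_V terminal-cost rows.
m, N, r_L, r_V = 1, 5, 6, 4

# Decision variables: epigraph bounds c_0..c_N followed by u_0..u_{N-1};
# for m = 1 this gives the (m + 1) N + 1 parameters mentioned in the text.
dim = (N + 1) + m * N

# One epigraph block per stage p (3.4b) and one terminal block (3.4c).
A = np.zeros((r_L * N + r_V, dim))
for p in range(N):
    A[p * r_L:(p + 1) * r_L, p] = -1.0      # rows L [x_p; u_p] + l <= c_p
A[r_L * N:, N] = -1.0                       # rows V x_N + v <= c_N
```

The row count r_L·N + r_V shows directly why the number of affine pieces must be limited: it multiplies the horizon.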
with

    B̄ = [ B         0    ...  0 ;
          AB        B    ...  0 ;
          ⋮               ⋱   ⋮ ;
          A^{N−1}B  ...  AB   B ]

and

    Ā = [A; A²; ...; A^N].

Let

    F = [0_{1×m}, 1_{1×(N+1)}],  L_x = L(:, 1:n),

    A_ineq11 = [ L_u                                 ;
                 L_x B        L_u                    ;
                 ⋮                  ⋱                ;
                 L_x A^{N−2}B ...  L_x B   L_u       ;
                 V A^{N−1}B   ...  V AB    V B       ],

    A_ineq12 = blkdiag(−1_{r_L×1}, ..., −1_{r_L×1}) ∈ ℝ^{r_L(N+1)×(N+1)},

    A_ineq1 = [A_ineq11, A_ineq12],

    A_ineq2 = [blkdiag(A_U, ..., A_U), 0_{N r_{A_U}×(N+1)}],

    A_ineq = [A_ineq1^⊤, A_ineq2^⊤]^⊤,

    b_ineq = [ −l − L_x x̄_0; ...; −l − L_x A^{N−1} x̄_0; −v − V A^N x̄_0; b_U; ...; (b_U)_{N−1} ].

The problem can then be written as

    minimize  F z
    s.t.  A_ineq z ≤ b_ineq
          x̄_0 = x_0.
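The stacked prediction matrices B̄ and Ā above can be built generically and cross-checked against a step-by-step simulation; the (A, B) pair below is an illustrative stand-in, not the thesis's example system:

```python
import numpy as np

def prediction_matrices(A, B, N):
    """Stack x_{p+1} = A x_p + B u_p over the horizon:
    X = Abar x0 + Bbar U, with X = (x_1,...,x_N), U = (u_0,...,u_{N-1})."""
    n, m = B.shape
    Abar = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    Bbar = np.zeros((N * n, N * m))
    for r in range(N):
        for c in range(r + 1):
            Bbar[r*n:(r+1)*n, c*m:(c+1)*m] = np.linalg.matrix_power(A, r - c) @ B
    return Abar, Bbar

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Abar, Bbar = prediction_matrices(A, B, 3)

# Cross-check against an explicit simulation from x0 with inputs U.
x0 = np.array([1.0, -1.0]); U = np.array([0.5, -0.2, 0.1])
x, traj = x0, []
for u in U:
    x = A @ x + B.flatten() * u
    traj.append(x)
```

The block (r, c) of B̄ is A^{r−c}B, exactly the coefficient of u_c in x_{r+1}.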
Problem 4.1.

    minimize_{V_i, v_i}  Σ_{i∈ℕ_[1,M]} Σ_{p∈extreme(X_i^f)} V_i p + v_i    (4.1a)
    subject to
    v_i = 0    (4.1b)
    V_j A_K p + v_j − V_i p − v_i + L(p, κ(p)) ≤ 0    (4.1c)
    ∀ i, j ∈ ℕ_[1,M], p ∈ P_i:
    V_i p + v_i ≥ V_j p + v_j    (4.1d)

where extreme(·) denotes the extreme points of a polytope.

Problem 4.2. Let X_i^f = {x ∈ ℝ^n | G_i x ≤ g_i}, i ∈ ℕ_[1,M].

    minimize_{t_i, V_i, v_i, H_i^t, H_{ij}^d, H_{ij}^c}  Σ_{i∈ℕ_[1,M]} t_i
    subject to
    ∀ i, j with X_i^f ∩ A_K^{−1} X_j ≠ ∅:
    H_{ij}^d [G_i; G_j A_K] = V_j A_K − V_i + L_i
    H_{ij}^d [g_i; g_j] ≤ l_i + v_i − v_j
    H_{ij}^c ≥ 0
    ∀ i, j ∈ ℕ_[1,M]:
    H_{ij}^c G_i = V_j − V_i,  H_{ij}^c g_i ≤ v_i − v_j

Generally, the computational complexity of Problem 4.2 can be very large. If there are N (N > M) inequalities for all convh(X_i^f), the maximal dimension of Problem 4.2 could be M + Mn + M + N + 2NM² + NM², and the minimal dimension is M + Mn + M + N + 2NM + NM² if and only if for every i ∈ ℕ_[1,M] a j ∈ ℕ_[1,M] can always be produced.
Lemma 4.1. There exists c > 0 such that

    f(x) ≤ c · max_{i∈ℕ_[1,m]} F_i^P x.

Proof. First, max_{i∈ℕ_[1,m]} F_i^P x > 0 for any x ∈ P \ {0}: if this were not so, there would be y = λx with max_{i∈ℕ_[1,m]} F_i^P y ≤ 0 and λy ∈ P for every λ > 0, so P would not be bounded. Hence

    max { f(x) / max_{i∈ℕ_[1,m]} F_i^P x } =: c

exists, and f(x) ≤ c · max_{i∈ℕ_[1,m]} F_i^P x.
Proof completed.
Theorem 4.2. Suppose Assumption 4 holds. Then Problem 4.1 has a feasible solution for

    X_i^f = P_i ∩ X_f,    (4.4)

    γ ≥ (1/(1−λ)) max_{x∈∂P} ℓ(x, κ(x)),    (4.5)

where ℓ(x,u) is the running cost. According to Lemma 4.1, for x ∈ X_i^f it holds that (1−λ)γ F_i^P x ≥ ℓ(x, κ(x)). Define a function V : ℝ^n → ℝ by V(x) = γ F_i^P x if x ∈ X_i^f. Assume x ∈ X_i^f and A_K x ∈ X_j^f; it follows from Assumption 4 that

    V(A_K x) − V(x) + ℓ(x, κ(x)) ≤ (λ − 1 + (1−λ))γ F_i^P x = 0.

Hence V(x) = γ F_i^P x is a feasible solution for Problem 4.1.
Proof completed.

This method is slightly modified from Theorem 5 in [6], in which the construction of S is unnecessary. Moreover, it is worth mentioning that X_f is independent of γ; it is the largest positively invariant set w.r.t. x(k+1) = A_K x(k) considering the system constraints. Any further partition P_ij with X_ij^f = P_ij ∩ X_f based on (4.4), that is, i ∈ ℕ_[1,r_V], j ∈ ℕ_[1,p_p], P_ij ⊆ P_i, int(P_ij ∩ P_ik) = ∅ for j ≠ k, and ⋃_{j=1}^{p_p} P_ij = P_i, is also feasible.
5 Parameters
The controller is designed in Matlab with the help of the add-ons mpt3 [17] and cplex [11]. In this chapter we describe the parameters that are used in the algorithm.
5.2 λ And P_i

Lemma 5.1. λ = max(|eig(A_K)|), the largest absolute value of the eigenvalues of A_K, is the smallest possible λ such that (4.3) is satisfied.

The proof that the choice λ = max(|eig(A_K)|) is possible can be found in Lemma 1 of [3], with A = A_K and B = 0. The proof that a choice λ < max(|eig(A_K)|) is impossible can be found in [12]: let A_{K,1} = A_K/λ; then max(|eig(A_{K,1})|) = max(|eig(A_K)|)/λ > 1.

However, max(|eig(A_K)|) is in most cases not a good choice for λ, because it introduces a large number of constraints in P and therefore too many P_i. The dimension of V would be too large, and thus problem (4) would have too many constraints.

The terminal cost function is defined on P, but its domain is X_f. Hence X_f ⊆ P should be satisfied, otherwise the terminal cost function is not defined for some x ∈ X_f. This condition can be interpreted as F_j^P x ≤ 1 for all x ∈ X_f. Because P and X_f are both convex, only the vertices of X_f need to be checked.
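The vertex check X_f ⊆ P described above can be sketched as follows; the two polytopes are illustrative stand-ins in R^2:

```python
import numpy as np

# P = {x | F^P x <= 1}: here the box |x_i| <= 2.
FP = np.array([[0.5, 0.0], [-0.5, 0.0], [0.0, 0.5], [0.0, -0.5]])
# Hypothetical terminal set X_f: the box |x_i| <= 1, given by its vertices.
Xf_vertices = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])

def contains_vertices(FP, verts, tol=1e-9):
    """True iff F^P x <= 1 holds at every vertex, i.e. X_f is inside P."""
    return bool(np.all(FP @ verts.T <= 1.0 + tol))
```

Since both sets are convex, the inequality holding at the vertices of X_f implies it holds on all of X_f, which is exactly why only the vertices need to be checked.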
In this thesis, the P_i are chosen in the form of a spider-web structure with l_a layers; the P_i in (4.4) are further divided into P_ij. The example problem has the cost function

    J(x, u) = x^⊤ [1 0; 0 1] x + u²

with the constraints

    x(k) ∈ [−100, 100] × [−100, 100],  −5 ≤ u ≤ 5.
The local controller is calculated by the Matlab function dlqr:

    A_K = [ 0.6917  −0.3648 ;
            0.6167  −0.2703 ]

and its largest eigenvalue is 0.2904. The conservatism of the piecewise affine terminal cost calculated following Chapter 4 is measured by the largest function value in the terminal region. Again, because V(x) is a convex function and the terminal region is a polyhedron, which is also convex, only the terminal cost at the vertices of the terminal region needs to be checked. The result is presented in Figure 5.2. The conservatism is not monotonically increasing or decreasing in λ. Theoretically, a less conservative terminal cost can result in a non-optimal solution, especially when u weighs more in the cost function. On the other hand, generally, the smaller λ is, the larger the dimension of V is, and thus the longer the online calculation takes.
Figure 5.2: Conservatism of the terminal cost for λ between 0.4 and 0.95.
Figure 5.3: (a) Approximation with 3 data points and 2 affine functions; (b) approximation with 5 data points and 2 affine functions.
The data set must be chosen appropriately according to the number of affine functions. A data set that is too large increases the time cost without improving the performance, but results with too few data points are much worse. To ensure the quality of the approximation, neither I_1 nor I_2 may be too small; they are chosen as I_1 = I_2 = 10 in the rest of this thesis. The experiments also show that the approximation takes too much time with a data size of 1600 or more. p_r depends not only on the efficiency of the approximation, but also on the efficiency of the final MPC control.
Table 5.1: Approximation results for different parameter choices.

    I_1  I_2  d_s      p_r  t      best rms
    10   10   11 × 11  10   8      498
    10   10   11 × 11  20   20     344
    2    50   21 × 21  20   130    257/333/328/380/440/2681
    5    20   21 × 21  20   137    276/255/253
    10   10   21 × 21  20   169    269/245
    20   5    21 × 21  20   133    379
    50   2    21 × 21  20   247    285
    50   10   21 × 21  20   630    257
    1    10   41 × 41  20   230    250/339/369/256
    10   10   41 × 41  40   543    144
    10   10   41 × 41  20   2043   237
    1    20   41 × 41  40   1933   241
    10   10   41 × 41  40   19332  118

The results vary strongly when I_1 is too small; therefore the process should be repeated several times to show its average performance.
    {G_{i_1,...,i_n}, G_{i_1+1,...,i_n}, G_{i_1,i_2+1,...,i_n}, ..., G_{i_1,...,i_n+1}}  for i_1, ..., i_n ∈ ℕ_[1,N_d−1].

The uniquely determined pair (A_e, b_e) described in Section 3.2 for P_{i_1,...,i_n} can be calculated directly by MPT3. With A_e = (a_{e,1}, ..., a_{e,n+1}), and noting that a_{e,n+1} ≠ 0, there is

    f(x) = −(1/a_{e,n+1}) ((a_{e,1}, ..., a_{e,n}) x − b_e).

Hence

    L_UE(j, :) = −(1/a_{e,n+1}) (a_{e,1}, ..., a_{e,n}),  l_UE(j) = b_e / a_{e,n+1}.

This is another reason why a data set in grid form is preferred: the polyhedra can be easily established.

The size of the convex hull of this data point set, which is a cube determined by a, should be chosen so that (L_UE, l_UE) describe the original cost function well in the neighborhood of the origin. Moreover, the accuracy of the approximation may not be reliable if the value of a is too large, because many rows of the CPLF result (L_CPLF, l_CPLF) will be covered by (L_UE, l_UE), i.e. for many i ∈ ℕ_[1,r_{L_CPLF}], L_CPLF(i,:)x + l_CPLF(i) ≤ max(L_UE x + l_UE) for all x ∈ X. However, an a that is too small might make the function value in some region of the neighborhood of the origin too small. In this thesis, a_i, i ∈ ℕ_[1,n], are given for every dimension. Denote the maximum absolute value of x in the i-th dimension as r_i; then the a_i are chosen to be

    a_i = r_i / p_r^{1/n}.
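Recovering a UE row (L_UE(j,:), l_UE(j)) from the n+1 simplex vertices amounts to one linear solve. A sketch, using the normalization b_e = 1 (valid whenever the hyperplane does not pass through the origin); the vertex data is an illustrative 2-D example:

```python
import numpy as np

def ue_row(verts, vals):
    """verts: (n+1) x n vertex coordinates, vals: their function values.
    Points (x, y) on the affine hull satisfy a . (x, y) = b; fixing b = 1
    (hyperplane not through the origin), solve for a and read off the
    affine piece y = -(1/a_{n+1})(a_{1..n} x - b)."""
    n = verts.shape[1]
    M = np.hstack([verts, vals.reshape(-1, 1)])
    a = np.linalg.solve(M, np.ones(n + 1))
    L = -a[:n] / a[n]
    l = 1.0 / a[n]
    return L, l

# Plane y = 2 x1 + 3 x2 + 1 through three vertices.
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vals = 2 * verts[:, 0] + 3 * verts[:, 1] + 1
L, l = ue_row(verts, vals)
```

A more robust implementation would handle the through-the-origin case separately (e.g. by translating the data first), as MPT3's H-representation does internally.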
The combined result is denoted by L = L_CPLF ∪ L_UE = (L_i, l_i), i ∈ ℕ_[1,s_c+s_u]. After the best result is obtained, there can be some unnecessary (L_i, l_i), for which

    P = {x ∈ ℝ^n | (L_i − L_j)x + l_i − l_j > 0, ∀ j ≠ i} ∩ X = ∅.    (5.2)
6 Algorithm

The MPC controller design is divided into several parts; runco, space, etc., introduced below, are parts of the computer programs.

6.1 Runco

"Runco" defines the original cost function and returns the cost value for a given input and state. It is also used when calculating the approximating running cost function and the piecewise affine terminal cost function for the original cost function. The inputs are x ∈ X and u ∈ U; the output is y = ℓ(x, u) ∈ ℝ. In some situations the input of runco can also be x ∈ N_i, as introduced in Section 5.3.2, with output ℓ_i.
6.2 Space

"Space" builds the data point set of the variables, in our situation the states and the inputs. The input to this function is the set of possible values of the data points x. As mentioned in Section 5.3.3, a homogeneously distributed data set gives better performance, so for an n-dimensional variable it builds a data set in the form of Definition 3.1 in Section 3.2. It returns an n × p_a matrix x_s, where p_a = ∏_{i∈ℕ_[1,n]} n_i. The data point set of variables and function values can later be interpreted as the pairs (x_s(:, i), f(x_s(:, i))), i ∈ ℕ_[1,p_a], where f(x) can be chosen according to our needs.

As a simple example, for n = 2 let the possible values for x_1 be −1, 0, 1 and likewise −1, 0, 1 for x_2. The input is then

    [ −1  0  1 ;
      −1  0  1 ]

and the output is

    [ −1  0  1  −1  0  1  −1  0  1 ;
      −1 −1 −1   0  0  0   1  1  1 ].
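A minimal Python stand-in for the space block, building the Cartesian-product grid of per-dimension values (the thesis's implementation is in Matlab; this sketch only mirrors its input/output behavior):

```python
import itertools
import numpy as np

def space(values_per_dim):
    """Build the data point set as the Cartesian product of the possible
    values in each dimension; each column of the result is one grid point."""
    cols = list(itertools.product(*values_per_dim))
    return np.array(cols).T

xs = space([[-1, 0, 1], [-1, 0, 1]])
```

For the 2-D example above this yields a 2 × 9 matrix containing every pair exactly once.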
Algorithm (CPLF approximation):

5   x_UE := grid over [x_{1,min}, x_{1,max}] × ... × [x_{n,min}, x_{n,max}]
6   (L_UE, l_UE) = UE(x_UE)  % "UE" is the function block for the Upper Envelope and will be introduced in Section 6.4
7   L_UE := (L_UE, l_UE)
8   R_best := +inf
9   for t = 1 to I_1  % iteration in Step 5; as mentioned in Section 5.3.3, the iteration counts are chosen as I_1 = I_2 = 10
10    do a := a vector of p different random integers between 1 and p_a
11    for i = 1 to p
12      do u_i = X(:, a(i))
13    P := 0_{p_a×p}, c := 0_{1×(p+1)}
      % cluster the data points into p groups according to (3.2) in Step 1:
14    for i := 1 to p_a
        do assign x_i to the group j attaining min_{j∈{1,...,p}} ‖u_j − x_i‖
      do compute the cumulative group sizes c
      % the i-th group is then assigned as A(:, P_new((c(i)+1) : c(i+1)))
      for t_2 = 1 to I_2  % iteration in Step 4
        do calculate L_CPLF = (L_CPLF, l_CPLF) by (3.3)  % Step 2
        do lines 11 to 22 again, replacing line 12 with:
          find m so that L_CPLF(m, :)[x_i; 1] = max_{j∈{1,...,p}} (L_CPLF(j, :)[x_i; 1])  % Step 3
      for i = 1 to r_{l_CPLF}
        do if l_CPLF(i) > 0
          then l_CPLF(i) = 0
      L = [L_CPLF; L_UE]  % combine the CPLF result with the UE result
      for i = 1 to p_a
        do y_approx(i) = max(L[x_i; 1])
          y_i = A(n+1, i)
      R = ‖y_approx − y‖ / p_a^{0.5}
      if R < R_best
        then keep the current result and set R_best := R

The superposition of the two results makes some lines in L useless, i.e. they are "redundant" in the whole domain. To simplify this result, mpt3 is again applied: if the polyhedron

    P = {x ∈ X | (L(i, :) − L(j, :))[x; 1] ≥ 0, j ∈ ℕ_[1,r_L], j ≠ i}

is not of full dimension (in Matlab, P.isFullDim() == 0), then L(i, :) is deleted.
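The thesis performs this redundancy test exactly with mpt3's isFullDim(). A simpler numerical proxy, sketched below, drops rows that are never active on a sample of the domain (it can misclassify rows that are active only on unsampled slivers, so it is a heuristic, not a replacement for the polyhedral test):

```python
import numpy as np

def prune_redundant(L, l, X_samples):
    """Drop rows (L_i, l_i) that are never the maximizer on any sample:
    a numerical stand-in for the full-dimension test done with mpt3."""
    vals = X_samples @ L.T + l            # (num_samples, num_rows)
    active = np.argmax(vals, axis=1)
    keep = np.unique(active)
    return L[keep], l[keep]

# max(x, -x, x - 10): the third piece is redundant on [-5, 5].
L = np.array([[1.0], [-1.0], [1.0]])
l = np.array([0.0, 0.0, -10.0])
xs = np.linspace(-5.0, 5.0, 101).reshape(-1, 1)
L2, l2 = prune_redundant(L, l, xs)
```

Here the piece x − 10 never attains the maximum on the sampled interval, so it is removed.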
10  then x_{n+1} = [ x_s(1, i)  x_s(1, i)+x_{1,d}  ...  x_s(1, i)          ;
                     x_s(2, i)  x_s(2, i)          ...  x_s(2, i)          ;
                     ⋮                                   ⋮                 ;
                     x_s(n, i)  x_s(n, i)          ...  x_s(n, i)+x_{n,d} ]
    % every column of this matrix is one of the n+1 vertices of the n-dimensional simplex
12  for j = 1 to n + 1
      do solve a linear program in λ ∈ ℝ^{p_a}
13      s.t.  Σ_{k=1}^{p_a} λ_k = 1
14            λ_k ≥ 0
Algorithm (terminal region and its partition):

1   D_0 = [A_X; A_U K]
2   b_1 = [b_X; b_U]
3   P_ter = TerRe(D_0, b_1, K, A_K)  % terminal region; TerRe is defined below
4   [~, A_P, b_P] = TerRe(D_0, b_1, K, A_K/λ)
    % "~" means that the first output of the function is ignored
5   obtain an array (V_1, ..., V_p) containing the vertices of P_ter
6   λ_1 = max(max(A_P V_i))  % λ_1 is designed after (4.5)
7   A_P = A_P/λ_1
8   build the polyhedron P with constraints A_P x ≤ b_P
9   for i = 1 to row size of A_P
10    do for j = 1 to row size of A_P
11      do D(j, :) = A_P(j, :) − A_P(i, :)
12    build polyhedron P_i with constraints D x ≤ 0 and A_P x ≤ b_P
13    delete redundant constraints of P_i
      % it can be done in Matlab by the command P_i.minHRep
14    get the constraints A_{P_i} x ≤ b_{P_i} of P_i
15  for i = 1 to row size of A_P
16    do for j = 1 to l_a
17      do A_{l_a(i−1)+j} = A_{P_i},  b_{l_a(i−1)+j} = b_{P_i}
18      for k = 1 to row size of A_{P_i}
19        do if b_{P_i}(k) ≠ 0
20          then A_{l_a(i−1)+j} = [A_{l_a(i−1)+j}; A_{P_i}(k, :); −A_{P_i}(k, :)]
21          b_{l_a(i−1)+j} = [b_{l_a(i−1)+j}; (j/l_a) b_{P_i}(k); −((j−1)/l_a) b_{P_i}(k)]
22      build polyhedron P_{m, l_a(i−1)+j} with constraints A_{l_a(i−1)+j} x ≤ b_{l_a(i−1)+j}
        % the P_{m,i} are the partition for the terminal cost
Algorithm 4 TerRe
13  while k ≤ r_{V_P}
14    do if |V_P(k, :)| ≤ 1e−3
15      then delete V_P(k, :)
        % for V_P(k, :) = 0, (4.1c) is theoretically fulfilled with (4.1b); we delete points
        % that should be, but may not exactly be, at the origin to make the computation
        % easier and avoid possible errors
16      else k = k + 1
17  A_{eq,1} = [0_{r_{V_P}×(n+1)(i−1)}, −V_P, −1_{r_{V_P}×1}, 0_{r_{V_P}×(n+1)(M−i)}]
18  for each vertex k
19    do L̄(k) = max(L_best [V_P(k, :)^⊤; K V_P(k, :)^⊤])
20  b_eq = −L̄  % right side of inequality (4.1c)
21  A_eq = [],  b_eq = 0_{r×1}
22  for i = 1 to r
23    do A_eq = [A_eq; [0_{1×((i−1)l_a(n+1)+n)}, 1, 0_{1×((n+1)(M−(i−1)l_a)−1)}]]
      % equation (4.1b)
24  f = 0_{1×(n+1)M}
25  for i = 1 to M
26    do get the vertices of P_{m,i}: V_{P_m}
27    r_V := row size of V_{P_m}
28    k = 1
29    while k ≤ r_V
30      do if |V_{P_m}(k, :)| ≤ 1e−3
31        then delete V_{P_m}(k, :)
32        else k = k + 1
33    for j = 1 to r_V
34      do f = f + [0_{1×(i−1)(n+1)}, V_{P_m}(j, :), 1, 0_{1×(M−i)(n+1)}]
35    for j = 1 to M
36      do A_{eq,1} = [0_{r_V×(n+1)(i−1)}, −V_{P_m}, −1_{r_V×1}, 0_{r_V×(n+1)(M−i)}]
37      A_{eq,2} = [0_{r_V×(n+1)(j−1)}, V_{P_m}, 1_{r_V×1}, 0_{r_V×(n+1)(M−j)}]
38      A_eq = [A_eq; A_{eq,1} + A_{eq,2}]  % inequality (4.1d)
39      b_eq = [b_eq; 0_{r_V×1}]
40  solve the problem  min y = f x  s.t.  A_ineq x ≤ b_ineq,  A_eq x = b_eq
41  denote its solution as V_v
42  for i = 1 to size of V_v
43    do if |V_v(i)| ≤ 1e−3
44      then V_v(i) = 0
45  r_t = r_{V_v}/(n + 1)
46  for i = 1 to r_t
47    V(i, :) = V_v(((i−1)(n+1)+1) : ((i−1)(n+1)+n))
48    v(i) = V_v((i−1)(n+1)+n+1)

For the calculation of the terminal cost function for the original problem, line 23 should be replaced.
7 Simulation Studies

In this chapter the piecewise MPC method is simulated in Matlab under different conditions. Moreover, an MPC controller with the original running cost function and a proper terminal cost function is designed for comparison. The solver applied for quadratic problems is "cplexqp", for the approximating problem "cplexlp", and for non-quadratic problems "fmincon". The unit of time in this section is the second (s). In this chapter the running cost functions are always decomposed into two parts, L(x, u) = L_x(x) + L_u(u) = L_x[x^⊤, 1]^⊤ + L_u[u^⊤, 1]^⊤, and the row sizes of L_x and L_u are denoted r_{L_x} and r_{L_u}.

7.1 Problem With 1-dimensional State And 1-dimensional Input And Quadratic Cost Function

    c(x, u) = Σ_{i∈ℕ} x(i)² + 0.01 u(i)²
    s.t.  x(k+1) = 1.01 x(k) − u(k)
          x ∈ X, u ∈ U
          X = {x ∈ ℝ | −50 ≤ x ≤ 50};  U = {u ∈ ℝ | −5 ≤ u ≤ 5},

with sampling time T = 0.01 s. The local controller is obtained by LQR: K = 1.0001 and A_K = 0.0099. The terminal cost function for MPC with the original running cost function can be chosen as V_lqr(x) = x^⊤ P x, with P = 1.011. The MPC controller is run until x ∈ X_f. Denote by c_pl = c(x_pl, u_pl) and t_pl the cost and running time with the piecewise affine cost function, and by c_o = c(x_o, u_o) and t_o the cost and running time with the original cost function; note that u_pl is obtained with the piecewise affine cost function, but c_pl is still calculated with the original cost function.
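As an illustrative stand-in (not the MPC controller itself), the closed loop of this example under the saturated local control u = Kx, clipped to [−5, 5], can be simulated directly:

```python
# Simulate x(k+1) = 1.01 x(k) - u(k) under the saturated local control
# law u = K x with K = 1.0001, accumulating the running cost of the
# example. This is only the local controller, so the accumulated cost
# is not expected to match the MPC costs in the tables below.
K = 1.0001
x, cost, steps = 30.0, 0.0, 0
while abs(x) > 1e-3 and steps < 10000:
    u = max(-5.0, min(5.0, K * x))       # input saturation: -5 <= u <= 5
    cost += x**2 + 0.01 * u**2           # running cost x^2 + 0.01 u^2
    x = 1.01 * x - u                     # plant dynamics
    steps += 1
```

With A_K = 1.01 − K ≈ 0.0099, the unsaturated closed loop contracts very quickly; the saturation only matters while |Kx| > 5.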
Table 7.1: Comparison between two methods, 1D state, 1D input, quadratic cost function.

    r_Lu  l_a  N   c_pl    t_pl    c_o     t_o     x_0
    7     5    10  2367.8  0.3897  2367.8  0.1351  30
    4     1    10  2367.8  0.1965  2367.8  0.1251  30
    4     1    10  5391.6  0.2612  5391.6  0.1740  40
    4     1    10  5391.6  0.1780  5391.6  0.1719  40
    4     5    10  7102    0.3715  7102    0.1739  −45
    4     5    20  7102    0.8428  7102    0.2414  −45
The controller with a non-quadratic original cost function is also tested; (‖x‖₃)³ is used as the cost function here. The terminal cost function for the original function is also calculated by the piecewise affine method. Therefore two l_a are needed: l_{a,pl} is the number of layers for the approximated function, and l_{a,o} for the original. In this example the controller runs until all states and inputs converge to zero, because u = Kx is no longer the optimal solution in the terminal region.
Table 7.2: Comparison between two methods, 2D state, 1D input.

    r_Lx  r_Lu  l_a  hor  c_pl    t_pl     c_o     t_o     x_0
    24    4     5    10   5008.8  1.2978   5008.8  0.1842  (−50, 25)
    24    4     5    20   5008.8  3.4388   5008.8  0.2288  (−50, 25)
    24    4     5    50   5008.8  14.2777  5008.8  0.9328  (−50, 25)
Table 7.3: Comparison between two methods, 2D state, 1D input, non-quadratic cost function.

    r_Lu  hor  l_a,pl  c_pl    t_pl    l_a,o  c_o     t_o      x_0
    4     10   5       180220  5.7889  5      180020  7.2529   (−50, 25)
    4     10   1       180220  2.7405  1      180030  12.3374  (−50, 25)
    4     10   1       298.8   2.6825  5      289.1   12.3325  (−5, 5)
    4     10   1       298.8   5.9138  5      289.1   12.3325  (−5, 5)
As a conclusion of Table 7.3, the approximating controller runs much faster than the original one on non-quadratic problems. The approximating controller for a higher-order cost function leads to a slightly higher cost than the original one, and this is not affected by how well the cost function is approximated.
    A = [ 1  1/2  1/6 ;
          0  1    1/2 ;
          0  0    1   ]    (7.1a)

    B = [ 1/2  1/2 ;
          0    1   ;
          1    0   ]    (7.1b)

    Q = [ 1  0  0 ;
          0  1  0 ;
          0  0  1 ],   R = [ 1  0 ;
                             0  3 ]    (7.1c)
Table 7.4: Comparison between two methods, 3D state, 2D input, non-quadratic cost function.

    r_Lx  r_Lu  hor  l_a,pl  c_pl    t_pl    l_a,o  c_o     t_o   x_0
    27    14    30   1       347.01  121.81  5      333.82  1157  (−4, −5, 3)
    27    14    30   1       243.41  219.37  5      241.08  1084  (−3, 4, 2)
    16    8     30   1       373.87  108.43  5      333.82  1230  (−4, −5, 3)
    46    24    30   1       331.28  574.96  5      333.82  1230  (−4, −5, 3)
    46    24    30   5       331.28  601.61  5      333.82  1230  (−4, −5, 3)
Table 7.4 shows that the performance of the approximating controller can be superior to that of the original controller, and the conservatism of the terminal cost has no influence on this. We suggest the reason is that, for some steps, the Matlab function "fmincon" reaches its maximum calculation time and stops searching for the optimal solution. To confirm this, the maximum number of iterations was raised from 10000 to 20000, but the cost remained the same.

The cost function of (7.1d) is divided into 5 parts with ℓ_d(t) = |t|³, t ∈ ℝ. It is approximated with L_1(x_1) = max(L_{1i} x_1 + l_{1i}) ≈ ℓ_d(x_1), L_2(x_2) = max(L_{2j} x_2 + l_{2j}) ≈ ℓ_d(x_2), ..., L_5(u_2) = max(L_{5k} u_2 + l_{5k}) ≈ ℓ_d(u_2), where each approximating function for an x_i consists of 6 affine functions and each approximating function for a u_i consists of 4. As a result, the approximating function for (7.1d) consists of 6³ · 4² = 3456 affine functions. But the final cost with this piecewise affine controller is 341, worse than the one with the piecewise affine function consisting of r_{L_x} = 46 and r_{L_u} = 24 affine functions.
Figure 7.1: Temperature control with piecewise affine cost function and with different quadratic cost functions (QP control with R = 1, 10, 100); temperature in °C over t in s.
Declaration of Authorship

I hereby declare that I, Junsheng Su, have written this thesis independently, have used no aids other than those stated, and have marked all passages taken verbatim or in substance from other sources as such. This thesis has not been submitted in the same or a similar form to any other examination authority. Furthermore, I confirm that the electronic copy is identical to the other copies.

Place, Date                                        Signature
Bibliography

[1] Richard Bellman. Dynamic Programming. Dover Publications, Inc., 1st edition, 2003.
[2] Richard Bellman and Robert Roth. Curve fitting by segmented straight lines. Journal of the American Statistical Association, 64:1079–1084, 1969.
[3] Franco Blanchini, Fouad Mesquine, and Stefano Miani. Constrained stabilization with an assigned initial condition set. International Journal of Control, 62(3):601–617, 2007.
[4] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 7th edition, 2009.
[5] P.S. Bradley and O.L. Mangasarian. k-plane clustering. Journal of Global Optimization, 16(1):23–32, 2000.
[6] F. D. Brunner, M. Lazar, and F. Allgöwer. Computation of piecewise affine terminal cost functions for model predictive control. HSCC '14, 17:1–10, 2014.
[7] Eduardo F. Camacho and Carlos Bordons. Model Predictive Control. Springer-Verlag London, 2nd edition, 2013.
[8] E. Camponogara, D. Jia, B.H. Krogh, and S. Talukdar. Distributed model predictive control. IEEE Control Systems, 22(1):44–52, 2002.
[9] J. M. Carnicer. Multivariate convexity preserving interpolation by smooth functions. Advances in Computational Mathematics, 3(4):395–404, 1995.
[10] J. M. Carnicer and M. S. Floater. Piecewise linear interpolants to Lagrange and Hermite convex scattered data. Numerical Algorithms, 13(2):345–364, 1996.
[11] Adrian Curic. Using IBM CPLEX Optimization Studio with MathWorks MATLAB. http://www-01.ibm.com/support/docview.wss?uid=swg27043897&aid=1, 2014. Online; accessed 17 June 2016.
[12] E. G. Gilbert and K. T. Tan. Linear systems with state and control constraints: The theory and application of maximal output admissible sets. IEEE Transactions on Automatic Control, 36(9):1008–1020, 1991.
[13] Andreas Griewank. On stable piecewise linearization and generalized algorithmic differentiation. Optimization Methods and Software, 28(6):1139–1178, 2013.
[14] Shun-ichi Azuma, Jun-ichi Imura, and Toshiharu Sugie. Lebesgue piecewise affine approximation of nonlinear systems. Nonlinear Analysis: Hybrid Systems, 4(1):92–102, 2010.
[15] R.E. Kalman. Contributions to the theory of optimal control. Bol. Soc. Mat. Mexicana, 5:102–119, 1960.
[16] Basil Kouvaritakis and Mark Cannon. Model Predictive Control. Springer International Publishing Switzerland, 1st edition, 2016.
[17] Michal Kvasnica. MPT3 wiki. http://people.ee.ethz.ch/~mpt/3, 2014. Online; accessed 17 June 2016.
[18] Chow Yin Lai, Cheng Xiang, and Tong Heng Lee. Data-based identification and control of nonlinear systems via piecewise affine approximation. IEEE Transactions on Neural Networks, 22(12):2189–2200, 2011.
[19] Christian Lengauer. Loop parallelization in the polytope model. International Conference on Concurrency Theory, 715:398–416, 1993.
[20] Alessandro Magnani and Stephen P. Boyd. Convex piecewise-linear fitting. Optimization and Engineering, 10(1):1–17, 2008.
[21] R. Milman and E. J. Davison. A fast MPC algorithm using nonfeasible active set methods. Journal of Optimization Theory and Applications, 139(3):591–616, 2008.
[22] R. Misener and C. A. Floudas. Piecewise-linear approximations of multidimensional functions. Journal of Optimization Theory and Applications, 145(1):120–147, 2010.
[23] L.S. Pontryagin, V.G. Boltyanskii, R.V. Gamkrelidze, and E.F. Mishchenko. The Mathematical Theory of Optimal Processes. Gordon and Breach Science Publishers, 1st edition, 1986.
[24] D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. M. Scokaert. Constrained model predictive control: Stability and optimality. Automatica, 36(6):789–814, 2000.
[25] J. B. Rawlings and D. Q. Mayne. Model Predictive Control: Theory and Design. Nob Hill Publishing, 2009.
[26] Stefan Richter, Colin Neil Jones, and Manfred Morari. Computational complexity certification for real-time MPC with input constraints based on the fast gradient method. IEEE Transactions on Automatic Control, 57(6):1391–1403, 2012.
[27] E. Sontag. Nonlinear regulation: The piecewise linear approach. IEEE Transactions on Automatic Control, 26(2):346–358, 1981.
[28] Marco Storace and Oscar De Feo. Piecewise-linear approximation of nonlinear dynamical systems. IEEE Transactions on Circuits and Systems I: Regular Papers, 51(4):830–842, 2004.
[29] A. Toriello and J. P. Vielma. Fitting piecewise linear continuous functions. European Journal of Operational Research, 219(1):86–95, 2012.
[30] Yang Wang and Stephen Boyd. Fast model predictive control using online optimization. IEEE Transactions on Control Systems Technology, 18(2):267–278, 2009.