
A sequential convex programming algorithm for minimizing a sum of Euclidean norms with non-convex constraints

Le Hong Trang∗1,2, Attila Kozma1, Phan Thanh An2,3, and Moritz Diehl1,4

1 Electrical Engineering Department (ESAT-STADIUS) / OPTEC, KU Leuven, Kasteelpark Arenberg 10, 3001 Leuven-Heverlee, Belgium
2 Center for Mathematics and its Applications (CEMAT), Instituto Superior Técnico, Universidade de Lisboa, A. Rovisco Pais, 1049-001 Lisboa, Portugal
3 Institute of Mathematics, Vietnam Academy of Science and Technology, 18 Hoang Quoc Viet, Hanoi, Vietnam
4 Institute of Microsystems Engineering (IMTEK), University of Freiburg, Georges-Koehler-Allee 102, 79110 Freiburg, Germany

Abstract

Given p, q and a finite sequence of convex polygons ⟨P1, . . . , PN⟩ in R3, we propose an approximate algorithm to find a Euclidean shortest path starting at p, visiting the relative boundaries of the convex polygons in the given order, and ending at q. The problem can be rewritten as a variant of the problem of minimizing a sum of Euclidean norms:

    min_{p1,...,pN} Σ_{i=0}^{N} ‖pi − pi+1‖,

where p0 = p and pN+1 = q, subject to the constraint that pi lies on the relative boundary of Pi, for i = 1, . . . , N. The objective function is convex but not everywhere differentiable, and the constraint of the problem is not convex. By using a smooth inner approximation of Pi with parameter t, a relaxed form of the problem is constructed such that its solution, denoted by pi(t), is inside Pi but outside the inner approximation. The relaxed problem is then solved iteratively using sequential convex programming. The obtained solution pi(t), however, is not actually on the relative boundary of Pi. A so-called refinement of pi(t) is therefore finally required to determine a solution lying on the relative boundary of Pi, for i = 1, . . . , N.
It is shown that the solution of the relaxed problem tends to its refined one as t → 0. The algorithm is implemented in Matlab using the CVX package. Numerical tests indicate that the solution obtained by the algorithm is very close to a global one.

Keywords: Sequential convex programming, shortest path, minimizing a sum of Euclidean norms, non-convex constraint, relaxation.

1 Introduction
The problem of minimizing a sum of Euclidean norms arises in many applications, including the facility location problem [5] and the VLSI (very-large-scale integration) layout problem [1]. Many numerical algorithms for solving the problem have been introduced (see [11, 13, 14, 20], etc.). To solve

∗ email: le.hongtrang@esat.kuleuven.be
the unconstrained problem of minimizing a sum of Euclidean norms, Overton [11] gave an algorithm which has quadratic convergence under certain conditions. Based on polynomial-time interior-point methods, Xue and Ye [19] in 1997 introduced an efficient algorithm which computes an ε-approximate solution of the problem. Some applications to the Euclidean single-facility location problem, the Euclidean multifacility location problem, and the shortest network under a given tree topology were also presented. Qi et al. [13, 14] proposed two methods for solving the problem. A smoothing Newton method was introduced in 2000; the algorithm is globally and quadratically convergent. In 2002, by transforming the problem and its dual into a system of strongly semi-smooth equations, they presented a primal-dual algorithm for the problem by solving this system. For the problem of minimizing a sum of Euclidean norms with linear constraints, Andersen and Christiansen [2] proposed a Newton barrier method in which the linear constraints are handled by an exact L1 penalty. A globally and quadratically convergent method was recently introduced by Zhou [20] to solve this problem. All these problems are convex.
In this paper we consider the problem of finding the shortest path starting at a point p, then visiting the relative boundaries of convex polygons in 3D, denoted by ⟨P1, . . . , PN⟩, in a given order, and ending at q. The problem can be rewritten as a variant of the problem of minimizing a sum of Euclidean norms:

    min_{p1,...,pN} Σ_{i=0}^{N} ‖pi − pi+1‖,

where p0 = p and pN+1 = q are fixed, and pi is on the relative boundary of the convex polygon Pi, for i = 1, . . . , N. This problem is non-convex. Based on the sequential convex programming approach [15, 16], we introduce an approximate algorithm for solving it. By using a smooth inner approximation of Pi with parameter t, a relaxed form of the problem is constructed such that its solution, denoted by pi(t), is inside Pi but outside the inner approximation. The relaxed problem is then solved iteratively using sequential convex programming. The obtained solution pi(t), however, is not actually on the relative boundary of Pi. A so-called refinement of pi(t) is therefore finally required to determine a solution lying on the relative boundary of Pi, for i = 1, . . . , N. It is also shown that the solution of the relaxed problem tends to its refined one as t → 0. The algorithm is implemented in Matlab using the CVX package. Numerical tests indicate that the solution obtained by the algorithm is close to a global one.
The rest of the paper is organized as follows. In Section 2, we briefly recall the general framework of sequential convex programming, some notation, and a method for approximating convex polygons. Section 3 presents the formulation of the problem and then introduces our new algorithm. Section 4 gives an analysis of the proposed algorithm. In Section 5, some numerical tests are given. The conclusion is given in Section 6.

2 Preliminaries
2.1 Sequential convex programming (SCP)
We now recall a framework of SCP [16]. Consider the problem

    min  cᵀx
    s.t. g(x) ≤ 0,        (1)
         x ∈ Ω,

where g : Rn → Rm is a nonlinear and smooth function on its domain, and Ω is a nonempty closed convex subset of Rn.
The main difficulty of problem (1) is concentrated in the nonlinearity of g(x). It can be overcome by linearizing g at the current iteration point while maintaining the remaining convexity of the original problem. Let us assume that g(x) is twice continuously differentiable on its domain, and denote by λ the Lagrange multiplier of the Lagrangian of (1). The full-step sequential convex programming algorithm for solving (1) is given as follows.

A general SCP framework

1. Choose an initial point x0 ∈ Ω and λ0 ∈ Rm. Let k = 0.

2. Solve the following convex problem

    min  cᵀx
    s.t. g(x^k) + ∇g(x^k)ᵀ(x − x^k) ≤ 0,
         x ∈ Ω

to obtain a solution z_+^k := (x_+^k, λ_+^k). If ‖z_+^k − z^k‖ ≤ ε holds for a given ε > 0, then stop. Otherwise, set z^{k+1} := z_+^k, k := k + 1, and repeat step 2.
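The full-step iteration above can be illustrated on a toy instance. The sketch below is in Python/NumPy (the paper's own implementation uses Matlab with CVX); the linear objective, the concave constraint g, the box Ω = [−2, 2]², and the starting point are all made up for illustration. Because Ω is a box and the objective is linear, each linearized subproblem reduces to an LP:

```python
# Full-step SCP on a toy problem: min c^T x  s.t.  g(x) <= 0, x in a box.
# g is concave, so the linearization overestimates g and each subproblem
# solution remains feasible for the original constraint.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])                  # linear objective

def g(x):                                 # concave constraint: 1 - ||x||^2 <= 0
    return 1.0 - x @ x

def grad_g(x):
    return -2.0 * x

x = np.array([1.5, 1.5])                  # feasible starting point
for k in range(50):
    # linearized constraint g(xk) + grad_g(xk)^T (x - xk) <= 0 as A_ub x <= b_ub
    A_ub = grad_g(x).reshape(1, -1)
    b_ub = np.array([grad_g(x) @ x - g(x)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(-2, 2), (-2, 2)])
    x_new = res.x
    if np.linalg.norm(x_new - x) <= 1e-9: # full step; stop at a fixed point
        x = x_new
        break
    x = x_new

print(x, g(x))                            # converges to the corner (-2, -2)
```

Each iterate stays feasible (g(x) ≤ 0) by the concavity argument that reappears later in Proposition 1.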

2.2 Notations
We denote the Euclidean norm in Rn (n ≥ 2) by ‖ · ‖. Let a ∈ R3 and A ⊂ R3; the distance from a to A is given by

    d(a, A) = inf_{a′∈A} ‖a − a′‖.

Let A, B ⊂ R3; the directed Hausdorff distance from A to B is defined by (see [7])

    dH(A, B) = sup_{a∈A} d(a, B).
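For finite point sets this definition can be evaluated directly. The following small NumPy sketch (not from the paper; names are illustrative) also shows that the directed distance is asymmetric:

```python
# Directed Hausdorff distance between finite point sets, following the
# definition above: dH(A, B) = sup_{a in A} inf_{b in B} ||a - b||.
import numpy as np

def directed_hausdorff(A, B):
    A, B = np.asarray(A, float), np.asarray(B, float)
    # for each a in A, distance to the nearest b in B; then take the max
    dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return dists.min(axis=1).max()

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0)]
print(directed_hausdorff(A, B))  # 1.0: the point (1, 0) is far from B
print(directed_hausdorff(B, A))  # 0.0: dH is not symmetric
```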

Given a convex polygon P in R3, we also recall the relative interior [4], denoted by riP, as follows:

    riP = {p ∈ P : ∀q ∈ P, ∃λ > 1, λp + (1 − λ)q ∈ P}.

Then the set ∂P := P \ riP is said to be the relative boundary of the polygon P.
Let ⟨P1, . . . , PN⟩ be a sequence of convex polygons in R3 and ⟨p1, . . . , pN⟩ be a sequence of points with pi ∈ Pi, for i = 1, . . . , N. We define a refinement of the sequence ⟨p1, . . . , pN⟩ onto ∂Pi as follows.

Definition 1. The sequence ⟨p̄1, . . . , p̄N⟩ given by

    p̄i = arg min_{p′i ∈ ∂Pi} ‖p′i − pi‖, i = 1, . . . , N,        (2)

is called the refined sequence of ⟨p1, . . . , pN⟩ on ∂Pi.
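For a planar polygon the refinement (2) is simply a projection onto the nearest boundary edge. The following Python/NumPy sketch computes it by minimizing over the edges; the paper works in R3 with Matlab, so the 2D setting and all names here are illustrative only:

```python
# A sketch of the refinement step (2) for a planar polygon: project a point
# pi in Pi onto the relative boundary by taking the nearest point over all
# edges of the polygon.
import numpy as np

def closest_point_on_segment(p, a, b):
    ab = b - a
    s = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return a + s * ab

def refine_to_boundary(p, vertices):
    """Return arg min over p' on the polygon boundary of ||p' - p||."""
    p = np.asarray(p, float)
    V = [np.asarray(v, float) for v in vertices]
    candidates = [closest_point_on_segment(p, V[i], V[(i + 1) % len(V)])
                  for i in range(len(V))]
    return min(candidates, key=lambda q: np.linalg.norm(q - p))

# unit square; the interior point (0.3, 0.4) is nearest to the left edge
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(refine_to_boundary((0.3, 0.4), square))  # [0.  0.4]
```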

Given a convex polygon P in R3, a parameter t > 0, and p ∈ P, a function Φ(p, t) is said to be an inner approximation of ∂P if it satisfies the following:

    Φ(p, t) is a concave differentiable function ∀p ∈ P,
    the set PΦ(p,t) := {p | Φ(p, t) ≥ 0} ⊂ P,        (3)
    S_{PΦ(p,t)} → S_P monotonically as t → 0,

where S_{PΦ(p,t)} and S_P denote the areas of the closed regions bounded by PΦ(p,t) and P, respectively. Such an approximation can be obtained by several methods, for example using the KS function [8] (described in the next subsection), convex function approximation [6], and soft-max.

2.3 Kreisselmeier-Steinhauser (KS) function

The KS function was first introduced by Kreisselmeier and Steinhauser [8]. The function aims to provide a single measure of all constraints in an optimization problem. In particular, consider an optimization problem containing m inequality constraints gj(x) ≤ 0, j = 1, . . . , m. The KS function overestimates this set of inequalities by the composite function

    fKS(x, ρ) = (1/ρ) ln( Σ_{j=1}^{m} e^{ρ gj(x)} ),

where ρ is an approximation parameter. Raspanti et al. [17] have shown that for ρ > 0, fKS(x, ρ) ≥ max_j{gj(x)}. Furthermore, for ρ1 ≥ ρ2, fKS(x, ρ1) ≤ fKS(x, ρ2). This implies that fKS(x, ρ1) gives a better estimate of the feasible region of the optimization problem than fKS(x, ρ2). In the following example, the application of the KS function to some single-variable constraint functions is visualized. Consider two simple convex inequality constraints

    g1(x, y) = (x − 5)² − y ≤ 0,
    g2(x, y) = x − y − 1 ≤ 0.

The KS function of the constraints is shown in Fig. 1. Since the KS function inherits the convexity of the original constraints, it is convex here. Furthermore, fKS(x, ρ) tends to max{g1(x), g2(x)} as ρ → ∞ [17]. Intuitively, this means that fKS(x, ρ) approaches the boundary of the feasible region.
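The stated properties can also be checked numerically. The sketch below evaluates fKS at one point of the example constraints using Python/SciPy (the paper's figures are produced in Matlab); the chosen test point is arbitrary:

```python
# Numerical check of the quoted KS-function properties: f_KS overestimates
# max_j g_j, decreases as rho grows, and tends to the max as rho -> infinity.
import numpy as np
from scipy.special import logsumexp

def f_ks(g_vals, rho):
    # (1/rho) * ln(sum_j exp(rho * g_j)), computed stably via logsumexp
    return logsumexp(rho * np.asarray(g_vals)) / rho

x, y = 2.0, 0.0
g = [(x - 5.0) ** 2 - y, x - y - 1.0]        # g1 = 9, g2 = 1
m = max(g)

assert f_ks(g, 10.0) >= m                    # overestimation
assert f_ks(g, 100.0) <= f_ks(g, 10.0)       # monotone in rho
assert abs(f_ks(g, 1000.0) - m) < 1e-6       # tends to the max as rho grows
print(f_ks(g, 1.0), m)
```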

3 An SCP algorithm for finding the shortest path visiting the relative boundaries of convex polygons

3.1 Relaxed form
Given p, q and a sequence of convex polygons ⟨P1, P2, . . . , PN⟩ in R3, a formulation of the problem of finding the shortest path visiting the relative boundaries of the convex polygons can be given by

    min_{pi ∈ ∂Pi}  Σ_{i=0}^{N} ‖pi − pi+1‖        (PMSN)
Figure 1: The KS function for the simple constraints g1 and g2, plotted for t = 0.1, 0.2, 0.5.

where p0 = p and pN+1 = q. There are two challenges in solving the problem. The first is that pi is constrained to be on ∂Pi, and this constraint is not convex. Second, the function representing the relative boundary is non-smooth at the vertices of the polygons; hence, it is not differentiable there. These difficulties can be overcome by using a relaxation of the constraints of problem (PMSN).
Our method first computes an inner approximation of each ∂Pi using a function Φi(pi, t) satisfying (3). An intermediate solution, denoted by pi(t), can then be obtained by using sequential convex programming, described later, such that the following two conditions are satisfied:

    Φi(pi, t) ≤ 0 and pi ∈ Pi,        (4)

for i = 1, 2, . . . , N. We then decrease the value of t to get a better approximation of Pi; this aims to push pi(t) toward ∂Pi. In particular, pi(t) is generated such that pi(t) → p̄i(t) as t → 0, where p̄i(t) ∈ ∂Pi is obtained by using (2), for i = 1, 2, . . . , N (see Proposition 6 later).
A geometrical interpretation of the proposed method is shown in Fig. 2. Given two points p, q and two convex polygons P1, P2, we seek points pi ∈ ∂Pi such that ‖p − p1‖ + ‖p1 − p2‖ + ‖p2 − q‖ is minimal, for i = 1, 2. Initializing t = t0, an approximation Φi(pi, t0) of ∂Pi is computed. We then determine pi(t0) such that Φi(pi(t0), t0) ≤ 0 and pi(t0) ∈ Pi by using SCP. We reduce t = tj and compute pi(tj) iteratively, for j = 1, 2, . . ., until t is small enough. This means that pi(tj) is moved closer to ∂Pi step by step. A sequence of intermediate solutions pi(tj) (j ≥ 0) is determined such that Φi(pi(tj), tj) ≤ 0 and pi(tj) ∈ Pi, for i = 1, 2.
Instead of solving (PMSN) directly, we solve a sequence of sub-problems of the form

    min_{p1,...,pN}  Σ_{i=0}^{N} ‖pi − pi+1‖
    s.t.  Φi(pi, t) ≤ 0,   i = 1, . . . , N,        (PRLX(p, t))
          pi ∈ Pi,         i = 1, . . . , N,

where p0 = p and pN+1 = q. We now formulate (PRLX(p, t)) for a certain value of t. With given
[Figure 2: as t decreases (t0 > t1 > t2), the iterates pi(tj) move toward the relative boundary ∂Pi, and the regions Φi(pi, tj) ≤ 0 approximate Pi increasingly well.]
convex polygons Pi ⊂ R3 and pi ∈ Pi, for i = 1, . . . , N, this can be expressed by

    Ai pi + bi ≤ 0,   i = 1, . . . , N,
    ciᵀ pi + di = 0,  i = 1, . . . , N,        (5)

where Ai and bi are a parameter matrix and vector, respectively. For each convex polygon Pi, a row of Ai, together with the corresponding element of bi and the equation ciᵀpi + di = 0, specifies the line containing a boundary edge of Pi. A numerical nonlinear optimization formulation of problem (PRLX(p, t)) is thus given by

    min_{p1,...,pN}  Σ_{i=0}^{N} ‖pi − pi+1‖
    s.t.  Φi(pi, t) ≤ 0,     i = 1, . . . , N,        (PNLP(p, t))
          Ai pi + bi ≤ 0,    i = 1, . . . , N,
          ciᵀ pi + di = 0,   i = 1, . . . , N,

where p0 = p and pN+1 = q.

Remark 1. Let f(p1, p2, . . . , pN) = Σ_{i=0}^{N} ‖pi − pi+1‖, where p0 = p and pN+1 = q, and pi ∈ Pi for i = 1, . . . , N. Then f is a convex function.

In problem (PNLP(p, t)), the first constraint is concave. Using the SCP approach described in Subsection 2.1, we take the linearization of Φi(pi, t) at p̄i:

    Φi(pi, t) ≃ Φi(p̄i, t) + ∇Φi(p̄i, t)ᵀ(pi − p̄i).        (6)

The solution of problem (PNLP(p, t)) can then be obtained approximately by solving a sequence of subproblems of the form

    min_{p1,...,pN}  Σ_{i=0}^{N} ‖pi − pi+1‖
    s.t.  Φi(pi^k, t) + ∇Φi(pi^k, t)ᵀ(pi − pi^k) ≤ 0,  i = 1, . . . , N,  k = 0, 1, . . . ,        (PSCP(p^k, t))
          Ai pi + bi ≤ 0,    i = 1, . . . , N,
          ciᵀ pi + di = 0,   i = 1, . . . , N.

3.2 Approximating the convex polygons using the KS function

We can approximate the convex polygons in problem (PNLP(p, t)) using the KS function described in Subsection 2.3. Let

    Ai := (Ai1, . . . , Aimi)ᵀ and bi := (bi1, . . . , bimi)ᵀ,

where mi is the number of edges of Pi, for i = 1, . . . , N. To use the KS function, we first set up the matrices Ai and vectors bi, and then take

    Φi(pi, t) := t ln( Σ_{j=1}^{mi} e^{(Aij pi + bij)/t} ),        (7)

where t is the approximation parameter, for i = 1, . . . , N. Problem (PNLP(p, t)) is then rewritten as follows:

    min_{p1,...,pN}  Σ_{i=0}^{N} ‖pi − pi+1‖
    s.t.  Φi(pi, t) ≤ 0,     i = 1, . . . , N,        (PNLP−KS(p, t))
          Ai pi + bi ≤ 0,    i = 1, . . . , N,
          ciᵀ pi + di = 0,   i = 1, . . . , N,

where p0 = p and pN+1 = q. Taking the linearization of Φi(pi, t) gives the following problem in the SCP approach:

    min_{p1,...,pN}  Σ_{i=0}^{N} ‖pi − pi+1‖
    s.t.  Φi(pi^k, t) + ∇Φi(pi^k, t)ᵀ(pi − pi^k) ≤ 0,  i = 1, . . . , N,  k = 0, 1, . . . ,        (PSCP−KS(p^k, t))
          Ai pi + bi ≤ 0,    i = 1, . . . , N,
          ciᵀ pi + di = 0,   i = 1, . . . , N.
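As a concrete sketch of (7), consider a hypothetical unit square written in half-plane form Ai p + bi ≤ 0. Since Φi overestimates maxj(Aij p + bij), the constraint Φi(p, t) ≤ 0 carves out an inner approximation of the polygon that tightens as t → 0 (Python/SciPy illustration, not the paper's Matlab code):

```python
# KS approximation (7) of a polygon given by A p + b <= 0:
# Phi(p, t) = t * ln(sum_j exp((A_j p + b_j) / t)).
import numpy as np
from scipy.special import logsumexp

A = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, -1.0], [0.0, 1.0]])
b = np.array([0.0, -1.0, 0.0, -1.0])    # rows encode 0 <= x <= 1, 0 <= y <= 1

def phi(p, t):
    return t * logsumexp((A @ p + b) / t)

center = np.array([0.5, 0.5])
edge_pt = np.array([0.0, 0.5])          # lies on the boundary of the square

assert phi(center, 0.01) >= max(A @ center + b)  # overestimates the max
assert phi(center, 0.01) < 0.0          # interior point satisfies Phi <= 0
assert phi(edge_pt, 0.1) > 0.0          # boundary point is cut off by Phi <= 0
assert abs(phi(center, 0.01) - max(A @ center + b)) < 0.1  # tightens as t -> 0
print(phi(center, 0.1), phi(edge_pt, 0.1))
```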

3.3 New algorithm

Given p, q and a sequence of convex polygons ⟨P1, P2, . . . , PN⟩ in R3, Algorithm 1 iteratively solves (PNLP−KS(p, t)) to obtain a refined local approximate shortest path passing through the ∂Pi, by means of the numerical solution of the corresponding nonlinear programming problem, for i = 1, 2, . . . , N. Namely, at each step of the algorithm a local shortest path is found for a certain value of t. This path, however, passes through the relative interiors of the convex polygons, so the refinement (2) is performed to ensure that it passes through the relative boundaries of the polygons. By Definition 1 this is a refinement of the resulting shortest path. Proposition 6 indicates that the shortest path tends to its refinement as t → 0. Let us denote by ⟨p1(t), . . . , pN(t)⟩ and ⟨p̄1(t), . . . , p̄N(t)⟩, respectively, the SCP solution of (PNLP−KS(p, t)) and its refinement for a certain value of t.

Algorithm 1 Finding the refined shortest path visiting the relative boundaries of convex polygons
Input: p, q and ⟨P1, P2, . . . , PN⟩ in R3, a solution error ε > 0, a step length 0 < η < 1, and a tolerance of the SCP solver µ > 0.
Output: π(p, q) = ⟨p, p̄1(t), . . . , p̄N(t), q⟩, where p̄i(t) ∈ ∂Pi, for i = 1, . . . , N.

1: Initialize t ← t0 > 0, and pi^0 ∈ Pi, for i = 1, . . . , N.
2: repeat
3:   Approximate ∂Pi by Φi(pi, t) using (7), i = 1, . . . , N.
4:   p(t) := (p1(t), . . . , pN(t))ᵀ and Φ(p, t) := (Φ1(p1, t), . . . , ΦN(pN, t))ᵀ.
5:   Call SCPSOLVER(Φ(p, t), p) to solve (PNLP−KS(p, t)), which gives ⟨p1(t), . . . , pN(t)⟩.
6:   Refine ⟨p1(t), . . . , pN(t)⟩ to obtain ⟨p̄1(t), . . . , p̄N(t)⟩ on ∂Pi using (2).
7:   p̄(t) := (p̄1(t), . . . , p̄N(t))ᵀ.
8:   t ← ηt.
9: until ‖p(t) − p̄(t)‖ ≤ ε.
10: return π(p, q) := ⟨p, p̄1(t), . . . , p̄N(t), q⟩.

11: procedure SCPSOLVER(Φ, p^0)
12:   k ← 0.
13:   repeat
14:     Compute the linearization of Φi(pi, t) at pi^k.
15:     Solve the convex subproblem (PSCP−KS(p^k, t)) to obtain pi^{k+1}.
16:     k ← k + 1.
17:     p^k ← (p1^k, p2^k, . . . , pN^k)ᵀ.
18:   until ‖p^k − p^{k−1}‖ ≤ µ.
19:   return ⟨p1^k, p2^k, . . . , pN^k⟩.
20: end procedure

4 Algorithm analysis
Let p := (p1, p2, . . . , pN)ᵀ. We now analyze the algorithm. The following property shows the feasibility of the full SCP step of Algorithm 1.

Proposition 1. For a certain value of t, if p′(t) is a feasible point of problem (PNLP−KS(p, t)), then a solution p∗(t) of problem (PSCP−KS(p^k, t)) corresponding to p′(t) exists and is feasible for problem (PNLP−KS(p, t)).

Proof. Since p∗(t) is a solution of (PSCP−KS(p^k, t)), Ai pi∗(t) + bi ≤ 0 and ciᵀpi∗(t) + di = 0, for i = 1, . . . , N. We only need to show that Φi(pi∗(t), t) ≤ 0, for i = 1, . . . , N. Indeed, since Φi(pi(t), t) is concave, we have

    Φi(pi∗(t), t) ≤ Φi(pi′(t), t) + ∇Φi(pi′(t), t)ᵀ(pi∗(t) − pi′(t)),

for i = 1, . . . , N. Furthermore, since p∗(t) is a solution of (PSCP−KS(p^k, t)),

    Φi(pi′(t), t) + ∇Φi(pi′(t), t)ᵀ(pi∗(t) − pi′(t)) ≤ 0.

It follows that Φi(pi∗(t), t) ≤ 0. The proof is completed.

Proposition 2. Procedure SCPSOLVER gives a local solution of problem (PNLP−KS(p, t)).

Proof. In order to solve problem (PNLP−KS(p, t)), procedure SCPSOLVER first converts the problem using extra variables y0, y1, . . . , yN, which gives the following equivalent problem:

    min_{p1,...,pN, y0,...,yN}  Σ_{i=0}^{N} yi
    s.t.  ‖pi − pi+1‖ − yi ≤ 0,   i = 0, . . . , N,
          Φi(pi, t) ≤ 0,          i = 1, . . . , N,        (8)
          Ai pi + bi ≤ 0,         i = 1, . . . , N,
          ciᵀ pi + di = 0,        i = 1, . . . , N.

This is a second-order cone program. By [15], the SCP framework in procedure SCPSOLVER gives a local solution of the nonlinear problem (8).

In vector form, let

    hi(pi, t) := (Φi(pi, t), ciᵀpi + di)ᵀ,

and

    gi(pi) := (max_j(Aij pi + bij), ciᵀpi + di)ᵀ,

for i = 1, . . . , N, j = 1, . . . , mi, where mi is the number of edges of Pi.
Proposition 3. lim_{t→0} ‖hi(pi, t) − gi(pi)‖ = 0, for i = 1, . . . , N.

Proof. The following is proven in [17]:

    lim_{t→0} Φi(pi, t) = max_{j∈{1,...,mi}} {Aij pi + bij},  for i = 1, . . . , N.

Hence,

    ‖hi(pi, t) − gi(pi)‖ = |Φi(pi, t) − max_{j∈{1,...,mi}} {Aij pi + bij}|.

It follows that

    lim_{t→0} ‖hi(pi, t) − gi(pi)‖ = 0, for i = 1, . . . , N.

The proof is completed.

This means that hi(pi, t) converges to gi(pi) as t → 0, for i = 1, . . . , N. The following states that this convergence is uniform.

Proposition 4. hi(pi, t) converges uniformly to gi(pi) as t → 0, for i = 1, . . . , N.

Proof. Pi is compact. On the one hand, {Φi(pi, t)} is a family of continuous functions that converges to max_{j∈{1,...,mi}}{Aij pi + bij} on Pi. On the other hand, by Property 3 in [17] we have

    Φi(pi, t1) ≥ Φi(pi, t2), ∀pi, whenever t1 ≥ t2.

By Theorem 7.13 in [18], we conclude that Φi(pi, t) converges uniformly to max_{j∈{1,...,mi}}{Aij pi + bij}, for i = 1, . . . , N, i.e., for every ε > 0 there exists T > 0 such that for every pi ∈ Pi and t < T,

    |Φi(pi, t) − max_{j∈{1,...,mi}}{Aij pi + bij}| < ε.

Since

    ‖hi(pi, t) − gi(pi)‖ = |Φi(pi, t) − max_{j∈{1,...,mi}}{Aij pi + bij}|,

we obtain ‖hi(pi, t) − gi(pi)‖ < ε. It follows that hi(pi, t) converges uniformly to gi(pi) as t → 0, for i = 1, . . . , N.
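The uniform convergence can also be observed numerically: for the KS function, the gap Φi(pi, t) − maxj(Aij pi + bij) is known to lie in [0, t ln mi], which bounds the sup over Pi independently of pi. A small Python check on a hypothetical unit square (not from the paper):

```python
# Numerical illustration of Propositions 3-4: the KS approximation converges
# to max_j(A_j p + b_j) uniformly on the polygon, with the log-sum-exp bound
# 0 <= Phi - max <= t * ln(m), where m is the number of edges.
import numpy as np
from scipy.special import logsumexp

A = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, -1.0], [0.0, 1.0]])
b = np.array([0.0, -1.0, 0.0, -1.0])    # unit square, m = 4 edges

def gap(p, t):
    z = A @ p + b
    return t * logsumexp(z / t) - z.max()

grid = [np.array([x, y])
        for x in np.linspace(0, 1, 11)
        for y in np.linspace(0, 1, 11)]
for t in (0.5, 0.1, 0.01):
    sup_gap = max(gap(p, t) for p in grid)
    # the sup over the polygon shrinks like t * ln(m): uniform convergence
    assert -1e-9 <= sup_gap <= t * np.log(len(b)) + 1e-12
print("uniform gap at t=0.01:", max(gap(p, 0.01) for p in grid))
```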

Combining this proposition with Theorem 7.9 in [18], we have the following corollary.

Corollary 1.

    lim_{t→0} sup_{pi∈Pi} {‖hi(pi, t) − gi(pi)‖} = 0.

Furthermore, setting ui := (pix, piy, Φi(pi, t))ᵀ and vi := (pix, piy, max_{j∈{1,...,mi}}{Aij pi + bij})ᵀ, then

    lim_{t→0} sup_{pi∈Pi} {‖ui − vi‖} = 0, for i = 1, . . . , N.

Under the uniform convergence of hi(pi, t), the following indicates that there is a corresponding convergence of the graphs with respect to the directed Hausdorff distance.

Proposition 5. lim_{t→0} dH(∂PΦi(pi,t), ∂Pi) = 0, for i = 1, . . . , N.

Proof. For pi ∈ Pi, taking ui := (pix, piy, Φi(pi, t))ᵀ ∈ ∂PΦi(pi,t), for i = 1, . . . , N, we have

    dH(∂PΦi(pi,t), ∂Pi) = sup_{ui∈∂PΦi(pi,t)} d(ui, ∂Pi)
                        = sup_{ui∈∂PΦi(pi,t)} inf_{vi′∈∂Pi} ‖ui − vi′‖.        (9)

Set vi := (pix, piy, max_{j∈{1,...,mi}}{Aij pi + bij})ᵀ ∈ ∂Pi, for i = 1, . . . , N. By Corollary 1,

    lim_{t→0} sup_{pi∈Pi} {‖ui − vi‖} = 0.

This means that for every ε > 0 there exists t0 > 0 such that for t < t0,

    sup_{ui∈∂PΦi(pi,t), vi∈∂Pi} {‖ui − vi‖} < ε, for i = 1, . . . , N.        (10)

Combining (9) and (10) gives

    dH(∂PΦi(pi,t), ∂Pi) ≤ sup_{ui∈∂PΦi(pi,t), vi∈∂Pi} {‖ui − vi‖} < ε, for i = 1, . . . , N.

We conclude that

    lim_{t→0} dH(∂PΦi(pi,t), ∂Pi) = 0, for i = 1, . . . , N.

Step 6 of Algorithm 1 refines ⟨p1(t), . . . , pN(t)⟩ to ⟨p̄1(t), . . . , p̄N(t)⟩, which lie on the relative boundaries of the convex polygons Pi, by (2). The following property indicates that the SCP solution given by solving (PSCP−KS(p^k, t)) tends to ∂Pi as t → 0, for i = 1, . . . , N. Hence, the resulting path is a refined shortest path.

Proposition 6. Let us denote by pi(t) an SCP solution, for i = 1, . . . , N. We take p̄i(t) such that

    p̄i(t) = arg min_{pi′∈∂Pi} {‖pi′ − pi(t)‖};

then

    lim_{t→0} ‖pi(t) − p̄i(t)‖ = 0, i = 1, . . . , N.

Figure 3: pi(t) tends to p̄i(t) ∈ ∂Pi as t → 0.

Proof. Since pi(t) is an SCP solution, for i = 1, . . . , N, Φi(pi(t), t) ≤ 0. Furthermore, Φi(pi, t) is decreasing and continuous with respect to t > 0, for i = 1, . . . , N. Then there exists t′ > 0 with t′ ≤ t such that Φi(pi(t), t′) = 0, i.e.,

    pi(t) := (pix(t), piy(t), Φi(pi(t), t′))ᵀ ∈ ∂PΦi(pi,t′)

(see Fig. 3). On the one hand, by Proposition 5,

    lim_{t′→0} dH(∂PΦi(pi,t′), ∂Pi) = 0,

for i = 1, . . . , N. On the other hand,

    ‖pi(t) − p̄i(t)‖ ≤ d(pi(t), ∂Pi) ≤ dH(∂PΦi(pi,t′), ∂Pi),

and the right-hand side tends to 0 as t → 0. The proof is completed.

Theorem 1. Given p, q ∈ R3 and a sequence of convex polygons ⟨P1, . . . , PN⟩ in R3, Algorithm 1 gives a refined shortest path starting at p, then visiting the relative boundary of Pi, for i = 1, . . . , N, and ending at q, in

    O( N³ √N log₂(1/µ′) log_{1/η}(t0/ε) log_{1/α}(‖p⁰ − p∗‖/µ) )        (11)

time, where µ′ is the tolerance of the solution of the convex subproblem, p⁰ and p∗ are the initial point and the local solution obtained by the full step of procedure SCPSOLVER, and α ∈ (0, 1) is a constant.

Proof. By the analysis of the primal-dual potential reduction algorithm for solving a second-order cone programming problem given in Section 4.4 of [10], it takes O(N³ √N log(1/µ′)) time to solve problem (PSCP−KS(p^k, t)) in procedure SCPSOLVER. By Theorem 1 in [15], if we choose an initial point p⁰ close enough to the solution p∗ of (PNLP−KS(p, t)), then the convergence rate of the full-step SCP algorithm is linear. Let α be the linear contraction rate. Then the loop of procedure SCPSOLVER takes O(log_{1/α}(‖p⁰ − p∗‖/µ)) iterations. Moreover, for an initial value t0, the outer loop of Algorithm 1 takes O(log_{1/η}(t0/ε)) iterations. Hence, (11) follows.
η

5 Implementation
5.1 Numerical tests
We test Algorithm 1 with convex polygons generated randomly in the square [−10, 10] × [−10, 10]. The initial point for each polygon is chosen randomly inside the polygon. The algorithm was implemented in Matlab 7.13 (R2011b) and executed on a laptop with Ubuntu 10.04, an Intel(R) Core(TM)2 Duo CPU P9400 at 2.4 GHz, and 2 GB RAM. To solve the convex subproblems, we use the CVX package¹ with the SeDuMi solver.
We first show a simple test with only one convex polygon, in which three values of the approximation parameter t are used for approximating the relative boundary of the polygon (see Fig. 4). We start the algorithm with a large value of t, say t1, and then decrease it to smaller values t2 and t3. The corresponding SCP solution p1(ti) thus approaches the relative boundary of the convex polygon, for i = 1, 2, 3.

Figure 4: The behavior of the iterations of our algorithm for three values of t.

Fig. 5 shows an example in which our method treats a problem including several parallel convex polygons. In this example we choose the initial points arbitrarily. The figure shows the behavior of the last iteration of our algorithm. The black circles in the figure show the track of the full-step SCP method.

¹ Available at: http://cvxr.com/cvx

Figure 5: An example testing with parallel convex polygons.

To solve the relaxed problem (PRLX(p, t)), one can also use the Matlab function FMINCON. For a fair comparison, the algorithm used in both the SCP method and FMINCON is an interior-point method. Table 1 shows the test results for larger sets of polygons. The tolerance for solving the convex subproblems is µ = 10⁻⁵. The run times shown in the table indicate that our algorithm runs faster than the FMINCON function.

Table 1: Number of SCP iterations and run time compared with FMINCON.

              FMINCON                        SCP method
  N      Iterations  Time (in sec.)     Iterations  Time (in sec.)
  5          56          20.726550          17           5.827492
  10        237         159.450801          22          11.83902
  50         78         335.066874          18          29.713664
  100        93        1180.509008          61         189.829125
  1000      157      130658.840711          93        4053.933176

5.2 Global solution comparison
In order to assess the solution obtained by our method, we compare it with a global one. Let us recall the original problem

    min_{pi∈∂Pi}  Σ_{i=0}^{N} ‖pi − pi+1‖,

where p0 = p and pN+1 = q. A global solution of the problem can be computed naively by solving a set of convex optimization problems of the form

    min_{pi∈ei}  Σ_{i=0}^{N} ‖pi − pi+1‖,        (PSUB)

where ei is an edge of ∂Pi, for i = 1, . . . , N. A global solution of the original problem is the best one among the solutions of all problems (PSUB). We use the CVX package to solve problem (PSUB), and then compare with the solution of the same problem obtained by our method. The results of practical computation show that the solution obtained by our method is very close to the global one. Fig. 6 shows an example in which we consider a problem consisting of five triangles (i.e., N = 5). Moreover, the time

Figure 6: The solution obtained by the SCP method is close to the global solution.

of computing the global solution becomes very large as the number of convex polygons grows. This is caused by the large number of problems (PSUB). We test with several simple data sets in which the convex polygons are triangles.
Table 2 shows the values of the objective functions and the run times of both methods. For each problem, the values of the objective function obtained by the two methods are very close to each other. The run time of finding the global solution, however, is huge compared with our method as the number of triangles increases.
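The brute-force scheme can be sketched as follows: pick one edge per polygon, solve the convex problem (PSUB) for that edge choice, and keep the best combination. The Python sketch below does this for a single made-up triangle in R3 (the paper instead solves (PSUB) with CVX), parametrizing the point on each edge by a scalar s ∈ [0, 1] and using a bounded scalar minimization:

```python
# Brute-force global solution for N = 1: enumerate the 3 edges of a triangle
# and minimize ||p - x|| + ||x - q|| with x restricted to each edge.
import numpy as np
from scipy.optimize import minimize_scalar

p = np.array([0.0, 0.0, 0.0])
q = np.array([2.0, 0.0, 0.0])
# an illustrative triangle lying in the plane x = 1
V = [np.array([1.0, 0.0, 1.0]),
     np.array([1.0, 1.0, -1.0]),
     np.array([1.0, -1.0, -1.0])]

def path_length_on_edge(a, b):
    # minimize ||p - x|| + ||x - q|| over x = a + s (b - a), s in [0, 1]
    f = lambda s: (np.linalg.norm(p - (a + s * (b - a)))
                   + np.linalg.norm((a + s * (b - a)) - q))
    return minimize_scalar(f, bounds=(0.0, 1.0), method="bounded").fun

best = min(path_length_on_edge(V[i], V[(i + 1) % 3]) for i in range(3))
print(best)   # ~2.19089, i.e. 2 * sqrt(1.2) for this symmetric instance
```

For N polygons the same enumeration yields the product of the edge counts, which explains the 3^N column in Table 2.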

Table 2: Run time (in seconds) of the SCP method compared with finding the global solution.

        Number of        Global solution                    SCP solution
  N      (PSUB)     f(p1, . . . , pN)      Time       f(p1, . . . , pN)      Time
  1        3          11.2932852761       0.786699     11.2932852782       4.310667
  2        3²         25.3764906159       3.058703     25.3764906160       3.912728
  3        3³         21.4718456914      10.640263     21.4718457135       5.213360
  4        3⁴         30.6756720025      36.430685     30.6756717918       4.208676
  5        3⁵         38.5995405264     122.666906     38.5995404673       4.696589
  6        3⁶         37.1193471840     328.787751     37.1193471990       7.444731
  7        3⁷         51.9773250115    1142.205860     51.9773249957       8.114511
  8        3⁸         58.2935123031    3564.068292     58.2935122392      10.891497
  9        3⁹         67.3492330551   12043.575363     67.5274566902       8.045229
  10       3¹⁰              #               #          59.8034756238      12.412469

  # Run time exceeded.

6 Concluding remarks
This paper proposed an approximate algorithm for solving a variant of the problem of minimizing a sum of Euclidean norms with non-convex constraints. The problem is to find a Euclidean shortest path between two points visiting the relative boundaries of convex polygons in R3. Based on the KS function, we transformed the problem into a relaxed form. The algorithm iteratively solves a sequence of relaxed problems using a numerical optimization strategy called sequential convex programming and then produces a refined solution. Our method was implemented in Matlab. The results of practical computation indicated that the solution obtained by our method is very close to the global one.

Acknowledgement
This research was supported by Research Council KUL: PFV/10/002 Optimization in Engineering Center OPTEC, GOA/10/09 MaNet and GOA/10/11 Global real-time optimal control of autonomous robots and mechatronic systems; Flemish Government: IOF/KP/SCORES4CHEM; FWO: PhD/postdoc grants and projects G.0320.08 (convex MPC), G.0377.09 (Mechatronics MPC); IWT: PhD grants, projects: SBO LeCoPro; Belgian Federal Science Policy Office: IUAP P7 (DYSCO, Dynamical systems, control and optimization, 2012-2017); EU: FP7-EMBOCON (ICT-248940), FP7-SADCO (MC ITN-264735), FP7-TEMPO, ERC ST HIGHWIND (259 166), Eurostars SMART, ACCM.
The first author (respectively, the third author) thanks the financial support offered by Portuguese National Funds through FCT (Fundação para a Ciência e a Tecnologia) under the scope of project PEst-OE/MAT/UI0822/2011 (respectively, by the National Foundation for Science and Technology Development, Vietnam (NAFOSTED) under grant number 101.02-2011.45 and the Vietnam Institute for Advanced Study in Mathematics (VIASM)).

References
[1] J. Alpert, T. F. Chan, A. B. Kahng, I. L. Markov, and P. Mulet, Faster minimization of linear wirelength for global placement, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 17, 1998, pp. 3-13.
[2] K. D. Andersen and E. Christiansen, A Newton barrier method for minimizing a sum of Euclidean norms subject to linear equality constraints, Technical Report, University of Southern Denmark, 1995.
[3] K. F. Bloss, L. T. Biegler, and W. E. Schiesser, Dynamic process optimization through adjoint formulations and constraint aggregation, Industrial & Engineering Chemistry Research, 38(2), 1999, pp. 421-432.
[4] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
[5] J. Fliege and S. Nickel, An interior point method for multifacility location problems with forbidden regions, Studies in Location Analysis, 14, 2000, pp. 23-45.
[6] J. J. Koliha, Approximation of convex functions, Real Analysis Exchange, 29(1), 2003/2004, pp. 465-471.
[7] C. Knauer, M. Löffler, M. Scherfenberg, and T. Wolle, The directed Hausdorff distance between imprecise point sets, Theoretical Computer Science, 412, 2011, pp. 4173-4186.
[8] G. Kreisselmeier and R. Steinhauser, Systematic control design by optimizing a vector performance index, in: IFAC Symposium on Computer-Aided Design of Control Systems (Ed. M. A. Cuenod), Zurich, Switzerland, Pergamon Press, Oxford, 1979, pp. 113-117.
[9] A. Kröller, T. Baumgartner, S. P. Fekete, and C. Schmidt, Exact solutions and bounds for general art gallery problems, ACM Journal of Experimental Algorithmics, 17(1), Article 2.3, 2012.
[10] M. S. Lobo, L. Vandenberghe, S. Boyd, and H. Lebret, Applications of second-order cone programming, Linear Algebra and Its Applications, 284, 1998, pp. 193-228.
[11] M. L. Overton, A quadratically convergent method for minimizing a sum of Euclidean norms, Mathematical Programming, 27, 1983, pp. 34-63.
[12] V. Polishchuk and J. S. B. Mitchell, Touring convex bodies - a conic programming solution, in: Proceedings of the 17th Canadian Conference on Computational Geometry, 2005, pp. 290-293.
[13] L. Qi and G. Zhou, A smoothing Newton method for minimizing a sum of Euclidean norms, SIAM Journal on Optimization, 11(2), 2000, pp. 389-410.
[14] L. Qi, D. Sun, and G. Zhou, A primal-dual algorithm for minimizing a sum of Euclidean norms, Journal of Computational and Applied Mathematics, 138, 2002, pp. 127-150.
[15] T. D. Quoc and M. Diehl, Local convergence of sequential convex programming for nonlinear programming, in: M. Diehl, F. Glineur, E. Jarlebring, W. Michiels (Eds.), Recent Advances in Optimization and its Applications in Engineering, Springer-Verlag, 2010, pp. 93-102.
[16] T. D. Quoc and M. Diehl, Sequential convex programming methods for solving nonlinear optimization problems with DC constraints, http://arxiv.org/abs/1107.5841v1, 2011, pp. 1-18.
[17] C. Raspanti, J. Bandoni, and L. Biegler, New strategies for flexibility analysis and design under uncertainty, Computers and Chemical Engineering, 24, 2000, pp. 2193-2209.
[18] W. Rudin, Principles of Mathematical Analysis, Third edition, McGraw-Hill, 1976.
[19] G. Xue and Y. Ye, An efficient algorithm for minimizing a sum of Euclidean norms with applications, SIAM Journal on Optimization, 7(4), 1997, pp. 1017-1036.
[20] G. Zhou, A quadratically convergent method for minimizing a sum of Euclidean norms with linear constraints, Journal of Industrial and Management Optimization, 3(4), 2007, pp. 655-670.