
4. Global Optimisation
Optimisation methods aim to find the values of a set of related variables in the objective function that produce the minimum or maximum value, as required. There are two types of objective function: deterministic and stochastic. When the objective function is a calculated value in the model (deterministic), we simply find the combination of parameter values that optimises this calculated value. When the objective function is a simulated random variable, we need to decide on some statistical measure associated with that variable that should be optimised; the optimisation algorithm must then run a simulation for each set of decision variable values and record the statistic. Many optimisation methods are available in the literature and implemented in commercial software.
The history of global optimisation begins with the simulation-based optimisation research of the 1970s, prompted by John Holland's invention of the genetic algorithm [1]. A genetic algorithm is a class of population-based adaptive stochastic optimisation procedures characterised by randomness in the optimisation process. The randomness may be present as noise in measurements, as Monte Carlo randomness in the search procedure, or both. The basic idea behind the genetic algorithm is to mimic a simple picture of Darwinian natural selection in order to find a good solution, repeatedly applying operations such as mutation, selection, and evaluation of fitness.
A little later, in 1978, Aimo Törn introduced his Clustering algorithm of global optimisation [2]. The method improves upon earlier local search algorithms that required multiple starts from several points distributed over the whole optimisation region. The Clustering algorithm avoids the drawback of Multi-start (in which many starting points are used) that several starts converge to the same minimum; it thus avoids the repeated determination of local minima. This is realised in three steps, which may be applied iteratively: (1) sample points in the region of interest, (2) transform the sample to obtain points grouped around the local minima, and (3) use a clustering technique to group these points. Starting a single local optimisation from each cluster determines the local minima and thus also the global minimum.
A little later, in 1983, another global optimisation algorithm, the Simulated annealing method, was proposed by Kirkpatrick et al. [3, 4] to mimic the annealing process in metallurgy. In the annealing process a metal in the molten state (at very high temperature) is slowly cooled so that the system at any time is approximately in thermodynamic equilibrium. As cooling proceeds, the system becomes more ordered: the liquid freezes or the metal recrystallises, attaining the ground state at T = 0. The simulated annealing method makes very few assumptions regarding the function to be optimised and is therefore quite robust with respect to irregular surfaces.
A little later, in 1986, Fred Glover [5] introduced his Tabu search method, although he attributes its origin to about 1977. This method is in some sense close to the Clustering algorithm. The basic concept of Tabu search is an overall approach that avoids entrainment in cycles by forbidding or penalising moves which take the solution, in the next iteration, to points in the solution space previously visited (hence "tabu"). The method was partly motivated by the observation that human behaviour appears to operate with a random element that leads to inconsistent behaviour in similar circumstances.
The pace of research on stochastic global optimisation accelerated considerably in the 1990s. Marco Dorigo, in his PhD thesis [6], introduced the Ant colony method of global optimisation. It studies artificial systems that take inspiration from the behaviour of real ant colonies. Ants deposit pheromones that guide fellow ants to the paths that lead to success. The chemical properties of pheromones and the ability of ants to gather and use this information are simulated in the Ant colony method to arrive at the global optimum. The method is well suited to combinatorial optimisation problems.
A couple of years later, in 1995, James Kennedy and Russell Eberhart [7] introduced their Particle Swarm method of global optimisation. In the animal world we observe that a swarm of birds or insects, or a school of fish, searches for food, protection, etc. in a very specific manner: if one member of the swarm sees a desirable path, the rest of the swarm follows quickly. The Particle swarm method mimics this behaviour. Every individual of the swarm is considered as a particle in a multidimensional space that has a position and a velocity. These particles fly through hyperspace and remember the best position they have seen. Members of the swarm communicate good positions to each other and adjust their own position and velocity based on these good positions. There are two main ways this communication is done: (1) a single swarm best is known to all particles; (2) local bests are known within neighbourhoods of particles.

The method of differential evolution (DE), another global optimisation method, grew out of Kenneth Price's attempts to solve the Chebyshev polynomial fitting problem in 1996 [8]. The crucial idea behind DE is a scheme for generating trial parameter vectors. Initially, a population of points (p points in d-dimensional space) is generated and evaluated for fitness. Then, for each point pi, three different points pa, pb and pc are randomly chosen from the population and combined into a new point pz. This pz is subjected to a crossover with the current point pi with a probability of crossover cr, yielding a candidate point, say pu, which is evaluated and, if found fitter, replaces pi in the next generation.
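To make the scheme concrete, the following is a minimal Python sketch of one DE generation under the description above; the differential weight F, the greedy replacement rule, and the function names are standard DE conventions assumed here rather than taken from the text.

```python
import numpy as np

def de_generation(pop, fit, f, F=0.8, cr=0.9, rng=np.random.default_rng()):
    # One generation of the trial-vector scheme described above (DE/rand/1/bin).
    p, d = pop.shape
    new_pop, new_fit = pop.copy(), fit.copy()
    for i in range(p):
        # choose three distinct points pa, pb, pc, all different from pi
        a, b, c = rng.choice([j for j in range(p) if j != i], size=3, replace=False)
        pz = pop[a] + F * (pop[b] - pop[c])          # perturbed point pz
        # crossover of pz with the current point pi, with probability cr per gene
        mask = rng.random(d) < cr
        mask[rng.integers(d)] = True                 # keep at least one gene from pz
        pu = np.where(mask, pz, pop[i])              # candidate point pu
        fu = f(pu)                                   # evaluate the candidate
        if fu <= fit[i]:                             # greedy replacement (assumed)
            new_pop[i], new_fit[i] = pu, fu
    return new_pop, new_fit
```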
All population-based methods of global optimisation have a probabilistic character inherent in them. As a result, one cannot obtain certainty in their results unless they are permitted an indefinitely large number of search attempts: the larger the number of attempts, the greater the probability of finding the global optimum, yet certainty is never reached. Secondly, all of them adapt themselves to the surface on which they seek the global optimum. Each of these methods operates with a number of parameters that may be tuned to make it more effective. This choice is often problem-oriented, and for obvious reasons: a particular choice may be extremely effective in some cases but ineffective (or counterproductive) in others. Additionally, there are trade-offs among the parameters. These features make all these methods a subject of trial and error.

4.1 Simulated Annealing Algorithm


Simulated annealing is an instance of a randomized search method, sometimes also called a probabilistic search method. The algorithm searches the feasible set of an optimization problem by considering random samples of points within the domain. It was first suggested for optimisation by Kirkpatrick et al. [3], based on the technique of Metropolis et al. [9]. Suppose we wish to solve an optimization problem of the form

$$\text{Minimise } f(x) \quad \text{subject to } x \in \Omega.$$

We assume that for any $x \in \Omega$ there is a set $N(x) \subset \Omega$ from which we can generate a random sample; a uniform distribution is often used for the sampling. The algorithm for simulated annealing is as follows.

Step 1: Set $k = 0$ and select an initial point $x^{(0)} \in \Omega$.

Step 2: Pick a candidate point $z^{(k)}$ at random from $N(x^{(k)})$.

Step 3: Toss a coin with probability of HEAD equal to $\min\{1, \, e^{-(f(z^{(k)}) - f(x^{(k)}))/T_k}\}$. If HEAD, set $x^{(k+1)} = z^{(k)}$; else set $x^{(k+1)} = x^{(k)}$. If the stopping criterion is satisfied, stop.

Step 4: Set $k := k + 1$ and go to Step 2.
Here $T_k$ represents a positive sequence called the temperature schedule or cooling schedule, and this form of acceptance is usually credited to Boltzmann. We can keep track of the best-so-far point $x_{best}^{(k)}$ which, at each $k$, is equal to the $x^{(j)}$, $j \in \{0, \ldots, k\}$, such that $f(x^{(j)}) \le f(x^{(i)})$ for all $i \in \{0, \ldots, k\}$. The best-so-far point can be updated at each step $k$ as follows:

$$x_{best}^{(k)} = \begin{cases} x^{(k)} & \text{if } f(x^{(k)}) < f(x_{best}^{(k-1)}), \\ x_{best}^{(k-1)} & \text{otherwise.} \end{cases}$$

It is typical to let the temperature $T_k$ decrease monotonically to 0. As the iteration index $k$ increases, the algorithm becomes increasingly reluctant to accept moves that increase the objective value.
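As an illustration, here is a minimal Python sketch of the algorithm above; the neighbourhood sampler and the cooling schedule are deliberately left as user-supplied functions, since the text does not fix them, and the example choices at the bottom are illustrative only.

```python
import math
import random

def simulated_annealing(f, x0, neighbour, temperature, max_iter=10000):
    # f: objective to minimise; x0: initial point (Step 1)
    # neighbour(x): random candidate drawn from N(x) (Step 2)
    # temperature(k): cooling schedule T_k, decreasing towards 0
    x, fx = x0, f(x0)
    x_best, f_best = x, fx
    for k in range(max_iter):
        z = neighbour(x)                              # Step 2
        fz = f(z)
        # Step 3: accept with probability min(1, exp(-(f(z) - f(x)) / T_k))
        if fz <= fx or random.random() < math.exp(-(fz - fx) / temperature(k)):
            x, fx = z, fz
        if fx < f_best:                               # best-so-far update
            x_best, f_best = x, fx
    return x_best, f_best

# Example: minimise f(x) = x^2 with a Gaussian neighbourhood
xb, fb = simulated_annealing(lambda x: x * x, 5.0,
                             lambda x: x + random.gauss(0.0, 1.0),
                             lambda k: 1.0 / (1 + k))
```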

4.3 Particle Swarm Optimisation (PSO)


The PSO was developed by Kennedy and Eberhart [7]. It mimics the natural behaviour of a swarm, e.g. a swarm of birds searching for a food source. In this interpretation the search for the best available food source (i.e. the optimum) is navigated using the memory of each particle of the swarm (a bird) as well as the knowledge of the swarm as a whole. PSO is naturally applicable to continuous design spaces. Over the past several years PSO has been successfully applied in many research and application areas, and the explosive number of journal publications on the topic suggests that it often obtains good results faster and more cheaply than other methods. Each particle represents a possible solution to the optimisation task at hand. During each iteration, the acceleration direction of a particle is determined by the best solution it has found so far and the global best position discovered so far by any particle in the swarm. This means that if a particle discovers a promising new solution, all the other particles will move closer to it, exploring the region more thoroughly in the process. The basic elements of standard PSO are briefly stated and defined as follows.

Particle $X_i(t)$, $i = 1, \ldots, M$: It is a potential solution represented by an n-dimensional vector, where n is the number of optimisation variables and M is the swarm size.

Swarm: It is a disorganised population of moving particles that tend to cluster together, while each particle seems to be moving in a random direction.

Individual best position $P_i(t)$: As a particle moves through the search space, it compares the fitness value at its current position to the best fitness value it has attained previously.

Global best position $P_g(t)$: It is the best position among all individual best positions achieved so far.

Particle velocity $V_i(t)$: It is the velocity of the moving particle, represented by an n-dimensional vector. The particle's velocity is updated according to the individual best and global best positions; after the velocity update, each particle's position is advanced to the next generation.

4.3.1. Classical Particle Swarm Optimisation


In a PSO of swarm size M, each individual is treated as a volumeless particle in n-dimensional space. The position vector $X_i(t)$ and velocity vector $V_i(t)$ of particle i are initialised as

$$X_i(0) = X_{i,min} + rand \cdot (X_{i,max} - X_{i,min}), \qquad V_i(0) = X_{i,min} + rand \cdot (X_{i,max} - X_{i,min}),$$

and at time t are represented as

$$X_i(t) = (X_{i1}(t), X_{i2}(t), \ldots, X_{in}(t)), \qquad V_i(t) = (V_{i1}(t), V_{i2}(t), \ldots, V_{in}(t)). \tag{4.1}$$

The particle moves according to the following equations:

$$V_{ij}(t+1) = w \, V_{ij}(t) + c_1 r_1 \big( P_{ij} - X_{ij}(t) \big) + c_2 r_2 \big( P_{gj} - X_{ij}(t) \big), \tag{4.2}$$

$$X_{ij}(t+1) = X_{ij}(t) + V_{ij}(t+1),$$

where the three terms of (4.2) are, respectively, the inertia term (the current velocity weighted by the inertia factor w), the particle-memory or self-confidence term, and the swarm-influence or swarm-confidence term,
for $i = 1, 2, \ldots, M$ and $j = 1, 2, \ldots, n$. The parameters $c_1$ and $c_2$ are called acceleration coefficients and satisfy $c_1 + c_2 \le 4$ to guarantee the convergence of the particles. The parameter $w$ is the inertia weight, introduced to accelerate the convergence speed of the PSO. The vector $P_i = (P_{i1}, P_{i2}, \ldots, P_{in})$ is the best previous position (the position giving the best fitness value) experienced by particle i, denoted pbest. The vector $P_g = (P_{g1}, P_{g2}, \ldots, P_{gn})$ is the position of the best particle (with the best fitness value) among all particles in the swarm, denoted gbest. $r_1$ and $r_2$ are two different random numbers uniformly distributed in (0, 1). Empirical studies show that the PSO performs well when $w$ decreases linearly from 0.9 to 0.4 over the run.
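A compact Python sketch of equations (4.1)-(4.2) follows; the swarm size, iteration count, acceleration coefficients, and the test function are illustrative choices, and velocities are initialised like positions, as in the equations above.

```python
import numpy as np

def pso(f, x_min, x_max, n, M=30, max_iter=200, c1=2.0, c2=2.0,
        rng=np.random.default_rng()):
    # Positions and velocities initialised as in (4.1)
    X = x_min + rng.random((M, n)) * (x_max - x_min)
    V = x_min + rng.random((M, n)) * (x_max - x_min)
    P = X.copy()                                      # pbest positions
    fP = np.array([f(x) for x in X])                  # pbest fitness values
    g = P[fP.argmin()].copy()                         # gbest position
    for t in range(max_iter):
        w = 0.9 - 0.5 * t / max_iter                  # inertia: 0.9 -> 0.4
        r1 = rng.random((M, n))
        r2 = rng.random((M, n))
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)   # eq. (4.2)
        X = X + V
        fX = np.array([f(x) for x in X])
        better = fX < fP                              # update personal bests
        P[better], fP[better] = X[better], fX[better]
        g = P[fP.argmin()].copy()                     # update global best
    return g, fP.min()

# Example: 2-D sphere function on [-5, 5]^2
best, val = pso(lambda x: float(np.sum(x * x)), -5.0, 5.0, n=2)
```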

4.4 Genetic Algorithm Optimisation (GA)


The concept of the GA was developed by Holland and his colleagues in the 1970s [18]. The techniques of GA are inspired by evolutionary theory explaining the origin of species. In nature, weak and unfit species within their environment face extinction by natural selection, while strong species have a greater opportunity to pass their genes to future generations through reproduction. In the long run, species carrying the correct combination of genes become dominant in their population. Sometimes, during the slow process of evolution, random changes occur in genes; if these changes create additional advantages in the challenge for survival, new species evolve from the old ones. Unsuccessful species are eliminated by natural selection.
In the GA technique, a solution vector x is called an individual or chromosome. A chromosome is made of discrete units called genes, each of which controls one or more features of the chromosome. In Holland's original implementation of GA, genes are assumed to be binary digits; various researchers have since introduced more varied types of genes. Normally a chromosome represents a unique solution in the solution space. GA operates with a collection of chromosomes called a population. The population is normally initialised randomly. As the search progresses, the population comes to include fitter solutions, and in the end it converges to a single solution. Holland also presented a proof of convergence to the global optimum for the case where chromosomes are binary vectors.
The GA uses two operators to generate new solutions from existing ones: the so-called crossover and mutation operations. The crossover operator is the most important operator of GA. In crossover, generally two chromosomes, called parents, are combined to form new chromosomes, called offspring. The parents are selected from the existing chromosomes in the population with preference towards fitness, so that the offspring are expected to inherit good genes from their parents. By iteratively applying the crossover operator, genes of good chromosomes appear more frequently in the population, leading to convergence to an overall good solution. The mutation operator introduces random changes into the characteristics of chromosomes and is generally applied at the gene level. A chromosome produced by mutation will usually not differ significantly from the original; nevertheless, mutation plays a critical role in GA, since it reintroduces genetic diversity into the population and prevents the search from being trapped in a local optimum. Reproduction involves the selection of chromosomes for the next generation. In the most general case, the fitness of an individual determines the probability of its survival to the next generation. Different selection procedures appear in the literature; proportional selection, ranking, and tournament selection are the most popular [24].

4.4.1. Real Coded Genetic Algorithm


In real-coded GA, real-valued genes are used instead of the conventional bit-string encoding, i.e. chromosomes are treated as vectors of real numbers, and the genetic operators of the binary-coded GA are adapted accordingly. In real-coded GAs, crossover has always been considered the fundamental search operator.
Selection and Crossover

Selection of two parents $x^i = (x_1^i, x_2^i, x_3^i, \ldots, x_n^i)$ and $x^j = (x_1^j, x_2^j, x_3^j, \ldots, x_n^j)$ for crossover is performed by a nonlinear ranking selection procedure [19]. In this procedure, with the population $Pop = \{x^1, x^2, x^3, \ldots, x^p\}$ of p chromosomes ranked from best to worst, we distribute the selection probability over the chromosomes by a nonlinear function. The selection probability of chromosome $x^i$ is

$$P(x^i) = q' (1 - q)^{i-1}, \qquad q' = \frac{q}{1 - (1 - q)^p},$$

where q is the selection probability of the best chromosome. After the selection probability of each chromosome is determined, roulette wheel selection is used to select good chromosomes. This selection procedure needs neither the individual chromosomes' fitness values nor any fitness scaling, which helps prevent premature convergence. After selection, the offspring $\tilde{x}^i, \tilde{x}^j$ are created using the following scheme [20-22]:

$$\tilde{x}^i = a x^i + (1 - a) x^j,$$
$$\tilde{x}^j = a x^j + (1 - a) x^i,$$

where a is a random number between -0.5 and 1.5.
Mutation

A widely used mutation operator in real-coded genetic algorithms is non-uniform mutation [23]. The scheme is as follows. From a chromosome $x^i = (x_1^i, x_2^i, x_3^i, \ldots, x_n^i)$, the mutated chromosome $x^{i+1} = (x_1^{i+1}, x_2^{i+1}, x_3^{i+1}, \ldots, x_n^{i+1})$ is created as follows:

$$x_j^{i+1} = \begin{cases} x_j^i + \Delta(i, \, x_j^u - x_j^i) & \text{if } r < 0.5, \\ x_j^i - \Delta(i, \, x_j^i - x_j^l) & \text{otherwise,} \end{cases}$$

where i is the current generation number and r is a uniformly distributed random number in [0, 1]. $x_j^u$ and $x_j^l$ are the upper and lower bounds of the j-th component of the mutated chromosome, respectively. The function $\Delta(i, y)$, given below, takes values in the interval [0, y]:

$$\Delta(i, y) = y \left( 1 - u^{(1 - i/MaxIter)^b} \right),$$

where u is a uniformly distributed random number in the interval [0, 1], MaxIter is the maximum number of iterations, and b is a parameter determining the strength of the mutation operator. In Romara we set b = 5.
Local Technique

This technique helps to concentrate the points in the region S around the global minimum [21]. The procedure is as follows.

(1) Select a random number.

(2) Construct a trial point x using the formula

$$x_j = (1 + \lambda_j) \, x_j^{best} - \lambda_j \, y_j, \qquad j = 1, 2, \ldots, n,$$

where $\lambda_j$ is a random number in [-0.5, 1.5] and $x_j^{best}$ is the j-th component of the best chromosome $x^{best}$.

(3) Replace the worst point $x^{worst}$ in S with x if $f(x) < f(x^{worst})$.
4.5 Constrained Optimisation Problem


Many search and optimisation problems in science and engineering involve a number of constraints which the optimal solution must satisfy. A constrained optimisation problem is usually written as a nonlinear programming (NLP) problem:

$$\min f(x), \qquad x = (x_1, x_2, \ldots, x_n) \in R^n,$$

subject to

$$g_i(x) \le 0, \quad i = 1, 2, \ldots, m,$$
$$h_j(x) = 0, \quad j = 1, 2, \ldots, k,$$
$$a_i \le x_i \le b_i, \quad 1 \le i \le n. \tag{*}$$

Here f(x) is the objective function, $g_i(x)$ and $h_j(x)$ are the inequality and equality constraints respectively, and $a_i$ and $b_i$ are the lower and upper bounds of the search space for $x_i$. This formulation of the constraints is not restrictive, since an inequality constraint of the form $g_i(x) \ge 0$ can also be represented as $-g_i(x) \le 0$, and the equality constraint $h_j(x) = 0$ is equivalent to the two inequality constraints $h_j(x) \le 0$ and $-h_j(x) \le 0$. The most common approach to solving constrained optimisation problems is the use of a penalty function: the constrained nonlinear programming (CNLP) problem is transformed into an unconstrained NLP (UNLP) problem by building a single objective function that penalises the constraints, which can then be minimised with an unconstrained optimisation algorithm. This is the main reason for the popularity of the penalty function approach. Its drawback is the difficulty of selecting suitable penalty values: if the penalty values are high, the minimisation algorithms are usually trapped in local minima, and if they are low, the algorithms can barely detect feasible optimal solutions.
Penalty Function Method

Various works in the literature have addressed this issue. One of the modifications recommended by [14] is dynamically changing the penalty values as the iteration progresses. The penalty function is generally defined as follows [14]:

$$F(x) = f(x) + h(t) \, H(x), \qquad x \in R^n,$$

where f(x) is the original objective function of the CNLP problem, h(t) is a dynamically modified penalty value, t is the algorithm's current iteration number, and H(x) is a penalty factor defined as

$$H(x) = \sum_{i=1}^{m} \theta(q_i(x)) \, q_i(x)^{\gamma(q_i(x))}, \qquad q_i(x) = \max\{0, g_i(x)\}, \quad i = 1, \ldots, m.$$

The function $q_i(x)$ is a relative violation function of the constraints, $\theta(q_i(x))$ is a multistage assignment function, $\gamma(q_i(x))$ is the power of the penalty function, and the $g_i(x)$ are the constraints described in (*). The functions h(.), $\theta(.)$ and $\gamma(.)$ are problem dependent. According to [14], $\gamma(q_i(x)) = 1$ if $q_i(x) < 1$, and otherwise $\gamma(q_i(x)) = 2$. Additionally, if $q_i(x) < 0.001$ then $\theta(q_i(x)) = 10$; else if $q_i(x) < 0.1$ then $\theta(q_i(x)) = 20$; else if $q_i(x) < 1$ then $\theta(q_i(x)) = 100$; otherwise $\theta(q_i(x)) = 300$. The function h(.) is set as $h(t) = t \sqrt{t}$.
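In code, the multistage penalty reads as the sketch below; the constraints are passed as a list of functions in the $g_i(x) \le 0$ form, and the stage thresholds are those quoted from [14] above.

```python
import math

def penalised_objective(x, f, constraints, t):
    # F(x) = f(x) + h(t) * H(x), with h(t) = t * sqrt(t), following [14]
    H = 0.0
    for g in constraints:                  # constraints given as g_i(x) <= 0
        q = max(0.0, g(x))                 # relative violation q_i(x)
        gamma = 1 if q < 1 else 2          # power of the penalty term
        if q < 0.001:
            theta = 10
        elif q < 0.1:
            theta = 20
        elif q < 1:
            theta = 100
        else:
            theta = 300
        H += theta * q ** gamma
    return f(x) + t * math.sqrt(t) * H
```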

The Gradient Repair Method

The gradient repair method proposed by Chootinan and Chen [25] utilises gradient information obtained from the constraint set to systematically repair infeasible solutions by directing them toward the feasible region. The steps of the method are summarised below.

(1) Determine the degree of constraint violation $\Delta V$ by

$$\Delta V = \begin{bmatrix} \min\{0, \, g(x)\} \\ h(x) \end{bmatrix},$$

where V consists of the vectors of inequality constraints g and equality constraints h of the problem.

(2) Compute $\nabla_x V$, the matrix of derivatives of the constraints with respect to the solution vector x.

(3) Compute the Moore-Penrose inverse (pseudoinverse) $\nabla_x V^{+}$, which is used as the approximate inverse of $\nabla_x V$ and is defined as

$$\nabla_x V^{+} = \nabla_x V^{T} \left( \nabla_x V \, \nabla_x V^{T} \right)^{-1}.$$

(4) Update the solution vector by

$$x^{t+1} = x^{t} + \Delta x = x^{t} - \nabla_x V^{+} \, \Delta V.$$

(5) Loop through steps (1) to (4).
Here the parameter $f_{max}(x)$ is the objective function value of the worst feasible solution in the population. This technique can help the particles reach the feasible region of the search space while letting the infeasible solutions properly attract the particles to the constraint boundary.
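A short numpy sketch of the repair loop follows; the violation vector and the constraint Jacobian are supplied by the caller, and the step limit and tolerance are illustrative assumptions.

```python
import numpy as np

def gradient_repair(x, violation, jacobian, max_steps=10, tol=1e-8):
    # violation(x): the vector dV = [min(0, g(x)); h(x)] stacked as in step (1)
    # jacobian(x):  nabla_x V, the constraint derivatives of step (2)
    for _ in range(max_steps):
        dV = violation(x)
        if np.linalg.norm(dV) < tol:           # (nearly) feasible: stop repairing
            break
        J_pinv = np.linalg.pinv(jacobian(x))   # Moore-Penrose inverse, step (3)
        x = x - J_pinv @ dV                    # step (4): x <- x - nabla_x V^+ dV
    return x
```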

References

1. J. Holland, Adaptation in Natural and Artificial Systems, Univ. of Michigan Press, Ann Arbor, 1975
2. A. A. Törn, A Search Clustering Approach to Global Optimization, in L. C. W. Dixon and G. P. Szegö (Eds), Towards Global Optimization 2, North Holland, Amsterdam, 1978
3. S. Kirkpatrick, C. D. Gelatt Jr and M. P. Vecchi, Optimization by Simulated Annealing, Science, 220 (4598), 1983, 671-680
4. V. Cerny, Thermodynamical Approach to the Traveling Salesman Problem: An Efficient Simulation Algorithm, J. Opt. Theory Appl., 45 (1), 1985, 41-51
5. F. Glover, Future Paths for Integer Programming and Links to Artificial Intelligence, Computers and Operations Research, 5, 1986, 533-549
6. M. Dorigo, Optimization, Learning and Natural Algorithms (in Italian), PhD thesis, Dipartimento di Elettronica, Politecnico di Milano, Milan, Italy, 1992
7. R. C. Eberhart and J. Kennedy, A New Optimizer using Particle Swarm Theory, Proceedings of the Sixth Symposium on Micro Machine and Human Science, IEEE Service Centre, Piscataway, NJ, 1995, 39-43
8. R. Storn and K. Price, Minimizing the Real Functions of the ICEC'96 Contest by Differential Evolution, Proceedings of the IEEE International Conference on Evolutionary Computation, Nagoya, Japan, May 1996
9. N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller and E. Teller, Equation of State Calculation by Fast Computing Machines, Journal of Chemical Physics, 21 (6), 1953, 1087-1092
10. M. Dorigo and C. Blum, Ant Colony Optimization Theory: A Survey, Theoretical Computer Science, 344, 2005, 243-278
11. K. Socha and M. Dorigo, Ant Colony Optimization for Continuous Domains, European Journal of Operational Research, 185, 2008, 1155-1173
12. Xiaoli Kou et al., Co-evolutionary Particle Swarm Optimization to Solve Constrained Problems, Computers and Mathematics with Applications, 57, 2009, 1776-1784
13. E. Zahara and Chia-Hsin Hu, Solving Constrained Optimization Problems with Hybrid Particle Swarm Optimization, Engineering Optimization, 40 (11), 2008, 1031-1049
14. Jun Sun et al., Using Quantum-Behaved Particle Swarm Optimization Algorithm to Solve Non-linear Programming Problems, International Journal of Computer Mathematics, 84 (2), 2007, 261-272
15. Leandro dos Santos Coelho, A Quantum Particle Swarm Optimizer with Chaotic Mutation Operator, Chaos, Solitons and Fractals, 37, 2008, 1409-1418
16. Maolong Xi et al., Quantum-behaved Particle Swarm Optimization with Elitist Mean Best Position, Complex Systems and Applications - Modelling, Control and Simulations, 14 (S2), 2007, 1643-1647
17. Leandro dos Santos Coelho, Gaussian Quantum-behaved Particle Swarm Optimization Approaches for Constrained Engineering Design Problems, Expert Systems with Applications, 37, 2010, 1676-1683
18. J. H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, 1975
19. Ren ZiWu, San Ye and Chen JunFeng, Hybrid Simplex Improved Genetic Algorithm for Global Numerical Optimisation, Acta Automatica Sinica, 33 (1), 2007, 91-95
20. P. Kaelo and M. M. Ali, Integrated Crossover Rules in Real Coded Genetic Algorithms, European Journal of Operational Research, 176, 2007, 60-76
21. I. G. Tsoulos, Modifications of Real Code Genetic Algorithm for Global Optimization, Applied Mathematics and Computation, 203, 2008, 598-607
22. I. G. Tsoulos, Solving Constrained Optimization Problems using a Novel Genetic Algorithm, Applied Mathematics and Computation, 208, 2009, 273-283
23. Kusum Deep and Manoj Thakur, A New Crossover Operator for Real Coded Genetic Algorithms, Applied Mathematics and Computation, 188, 2007, 895-911
24. A. Sokolov and D. Whitley, Unbiased Tournament Selection, GECCO 2005: Proceedings of the 2005 Conference on Genetic and Evolutionary Computation, New York, USA, 2005
25. P. Chootinan and A. Chen, Constraint Handling in Genetic Algorithms using a Gradient-Based Repair Method, Computers and Operations Research, 33 (8), 2006, 2263-2281
26. Y. Dong, J. F. Tang, B. D. Xu and D. Wang, An Application of Swarm Optimisation to Nonlinear Programming, Computers and Mathematics with Applications, 49 (11-12), 2005, 1655-1668
27. K. Deb, An Efficient Constraint Handling Method for Genetic Algorithms, Computer Methods in Applied Mechanics and Engineering, 186, 2000, 311-338
28. B. R. Secrest and G. B. Lamont, Visualising Particle Swarm Optimization - Gaussian Particle Swarm Optimization, Proceedings of the IEEE Swarm Intelligence Symposium, Indianapolis, IN, 2003, 198-204
29. R. A. Krohling, Gaussian Swarm: A Novel Particle Swarm Optimization Algorithm, Proceedings of the IEEE Conference on Cybernetics and Intelligent Systems, Singapore, 2004, 372-376
30. R. A. Krohling and L. S. Coelho, Particle Swarm with Exponential Distribution, Proceedings of the IEEE Congress on Evolutionary Computation, Vancouver, Canada, 2006, 5577-5582
31. L. Li, Y. Yang, H. Peng and X. Wang, Parameters Identification of Chaotic Systems via Chaotic Ant Swarm, Chaos, Solitons and Fractals, 28 (5), 2006, 1204-1211
32. B. Liu, L. Wang, Y. H. Jin, F. Tang and D. X. Huang, Improved Particle Swarm Optimization Combined with Chaos, Chaos, Solitons and Fractals, 25 (5), 2005, 1261-1271
