All content following this page was uploaded by Mandar Pandurang Ganbavale on 04 December 2014.
CERTIFICATE
This is to certify that the project entitled
Submitted by,
Prof. A. Vasan
(Department of Civil Engineering)
ACKNOWLEDGMENT
Firstly, I would like to thank BITS, Pilani for giving me the opportunity to do this study-oriented project.
I sincerely appreciate Prof. P. N. Rao for giving me the opportunity to work on this topic. I am grateful for his support and guidance, which have helped me expand my horizons of thought and expression.
I am also thankful to my friends for their help in sharing information about various aspects of this project.
Outline of the Proposed Topic of Project work
Introduction
Optimization problems are everywhere in academic research and in real-world applications such as engineering, finance, and the sciences. Wherever resources like space, time, and cost are limited, optimization problems arise. Researchers and practitioners therefore need an efficient and robust optimization approach for problems of different characteristics that are fundamental to their daily work; at the same time, solving a complex optimization problem should not itself be very difficult. In addition, an optimization algorithm should reliably converge to the true optimum for a variety of different problems, and the computing resources spent on searching for a solution should not be excessive. Thus, a useful optimization method should be easy to use, reliable, and efficient in achieving satisfactory solutions.
[Flowchart: Initialization → Mutation → Cross-over → Selection → convergence check; loop back to Mutation until converged]
In outline, differential evolution proceeds as follows:
1. Sample the search space at multiple, randomly chosen initial points, i.e., a population of individual vectors.
2. Since differential evolution is by nature a derivative-free continuous function optimizer, it encodes parameters as floating-point numbers and manipulates them with simple arithmetic operations such as addition, subtraction, and multiplication, generating new points that are perturbations (mutations) of existing points. For this, differential evolution mutates a (parent) vector in the population with a scaled difference of other, randomly selected individual vectors.
3. The resultant mutation vector is crossed over with the corresponding parent vector to
generate a trial or offspring vector.
4. Then, in a one-to-one selection process of each pair of offspring and parent vectors, the
one with a better fitness value survives and enters the next generation.
This procedure repeats for each parent vector and the survivors of all parent-offspring pairs
become the parents of a new generation in the evolutionary search cycle. The evolutionary
search stops when the algorithm converges to the true optimum or a certain termination
criterion such as the number of generations is reached.
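The four steps above can be sketched as a minimal DE/rand/1/bin loop. This is an illustrative Python sketch, not the report's MATLAB program; the sphere test function, all parameter values, and simple clipping for bound handling are assumptions made for the example:

```python
import numpy as np

def de(f, lb, ub, np_=20, f_scale=0.8, cr=0.9, gens=100, seed=0):
    """Minimal DE loop: initialization, mutation, crossover, selection."""
    rng = np.random.default_rng(seed)
    d = len(lb)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    # 1. Initialization: uniform random population in the box [lb, ub]
    x = lb + rng.random((np_, d)) * (ub - lb)
    fx = np.array([f(xi) for xi in x])
    for _ in range(gens):
        for i in range(np_):
            # 2. Mutation: perturb a random base vector with a scaled
            # difference of two other randomly chosen individuals
            r0, r1, r2 = rng.choice([j for j in range(np_) if j != i],
                                    3, replace=False)
            v = np.clip(x[r0] + f_scale * (x[r1] - x[r2]), lb, ub)
            # 3. Crossover: binomial mix of mutant and parent components
            mask = rng.random(d) < cr
            mask[rng.integers(d)] = True  # at least one mutant component
            u = np.where(mask, v, x[i])
            # 4. Selection: one-to-one greedy replacement of the parent
            fu = f(u)
            if fu <= fx[i]:
                x[i], fx[i] = u, fu
    best = np.argmin(fx)
    return x[best], fx[best]

# Illustrative run on the sphere function, whose minimum is 0 at the origin
xb, fb = de(lambda z: float(np.sum(z**2)), [-5, -5], [5, 5])
```

The loop terminates here after a fixed number of generations; any of the termination criteria mentioned above could be substituted.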
- Evolutionary algorithms follow the principle of natural evolution and survival of the fittest described by Darwinian theory.
- The major applications of evolutionary algorithms are in optimization, although they
have also been used to conduct data mining, generate learning systems, and build
experimental frameworks to validate theories about biological evolution and natural
selection, etc.
- EAs differ from traditional optimization techniques in that they usually evolve a population of solutions or individual points in the search space of decision variables, instead of starting from a single point. At each iteration, an evolutionary algorithm generates new solutions, also called offspring, by mutating and/or recombining current (parent) solutions, and then conducts a competitive selection to weed out poor solutions.
- In comparison with traditional optimization techniques, such as calculus-based nonlinear programming methods, evolutionary algorithms are usually more robust and achieve a better balance between exploration and exploitation of the search space when optimizing many real-world problems.
- Different main streams of evolutionary algorithms have evolved over the past forty
years. The majority of the current implementations descend from three strongly related
but independently developed branches:
o Evolution strategies
o Genetic algorithms, and
o Evolutionary programming.
- These approaches are closely related to each other in terms of their underlying
principles, while their exact operations and usually applied problem domains differ
from one approach to another.
- Genetic algorithms are better suited for discrete optimization because their decision variables are originally encoded as bit strings and are modified by logical operators.
- Evolution strategies and evolutionary programming, however, concentrate on mutation,
although evolution strategies may also incorporate crossover or recombination as an
operator.
- Evolution strategies are continuous function optimizers in nature because they encode parameters as floating-point numbers and manipulate them by arithmetic operations.
Although differences exist among these evolutionary algorithms, they all rely on the
concept of a population of individuals or solutions, which undergo such probabilistic
operations as mutation, crossover and selection to evolve toward solutions of better fitness
in the search space of decision variables.
- The mutation introduces new information into the population by randomly generating
variations to existing individuals.
- The crossover or recombination typically performs an information exchange between
different individuals in the current population.
- The selection imposes a driving force towards the optimum by preferring individuals
of better fitness.
- The fitness value may reflect the objective function value and/or the level of constraint
satisfaction.
- These operations form a loop, and evolutionary algorithms usually execute a number of generations until the obtained best-so-far solution is satisfactory or another termination criterion is fulfilled.
2. Differential evolution
Similar to other evolutionary algorithms, differential evolution is a population based,
derivative-free function optimizer. It usually encodes decision variables as floating-
point numbers and manipulates them with simple arithmetic operations such as
addition, subtraction, and multiplication.
The initial population {X_i,0 = (x_1,i,0, x_2,i,0, x_3,i,0, ..., x_D,i,0) | i = 1, 2, 3, ..., NP} is randomly generated according to a normal or uniform distribution.
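For instance, a uniformly distributed initial population over the box [x_low, x_up] can be generated as follows (a Python sketch; the values of NP, D, and the bounds are illustrative placeholders):

```python
import numpy as np

rng = np.random.default_rng(42)
NP, D = 10, 3                         # population size and dimension (illustrative)
x_low = np.array([-5.0, -5.0, -5.0])  # lower bounds on each variable
x_up = np.array([5.0, 5.0, 5.0])      # upper bounds on each variable
# Each of the NP individuals is a D-vector drawn uniformly from [x_low, x_up]
pop = x_low + rng.random((NP, D)) * (x_up - x_low)
```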
2.1.Mutation
At each generation g, this operation creates a mutation vector v_i,g for each parent vector in the current population {X_i,g = (x_1,i,g, x_2,i,g, ..., x_D,i,g) | i = 1, 2, 3, ..., NP}. The following mutation strategies are frequently used in the literature:

DE/rand/1: v_i,g = X_r0,g + F_i (X_r1,g - X_r2,g)
DE/best/1: v_i,g = X_best,g + F_i (X_r1,g - X_r2,g)

where
r0, r1, r2 = distinct integers uniformly chosen from the set {1, 2, ..., NP} \ {i},
X_r1,g - X_r2,g = difference vector used to mutate the parent,
X_best,g = best vector at the current generation,
F_i = the mutation factor, which usually lies in the interval (0, 1+).
If a component of the mutation vector violates a bound, it can be reset by the midpoint rule:

v_j,i,g = (x_j_low + x_j,i,g) / 2, if v_j,i,g < x_j_low
v_j,i,g = (x_j_up + x_j,i,g) / 2, if v_j,i,g > x_j_up

where v_j,i,g and x_j,i,g are the j-th components of the mutation vector v_i,g and the parent vector X_i,g at generation g, respectively.
This method performs well, especially when the optimal solution is located near or on the boundary.
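The midpoint repair rule can be written directly (a Python sketch; the example vectors and bounds are illustrative):

```python
import numpy as np

def midpoint_repair(v, x, x_low, x_up):
    """Reset out-of-bound mutant components to the midpoint between the
    violated bound and the corresponding parent component."""
    v = v.copy()
    low = v < x_low
    up = v > x_up
    v[low] = (x_low[low] + x[low]) / 2.0
    v[up] = (x_up[up] + x[up]) / 2.0
    return v

v = np.array([-6.0, 0.5, 7.0])        # mutant with two out-of-bound components
x = np.array([-4.0, 0.0, 4.0])        # parent vector (inside the bounds)
x_low = np.array([-5.0, -5.0, -5.0])
x_up = np.array([5.0, 5.0, 5.0])
repaired = midpoint_repair(v, x, x_low, x_up)  # → [-4.5, 0.5, 4.5]
```

Unlike plain clipping to the bound, this keeps repaired components strictly inside the box while still biased toward the violated boundary.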
2.2.Cross-over
After mutation, a 'binomial' crossover operation forms the final trial vector u_i,g = (u_1,i,g, ..., u_D,i,g):

u_j,i,g = v_j,i,g, if rand_j(0, 1) <= CR_i or j = j_rand
u_j,i,g = x_j,i,g, otherwise

where
rand_j(a, b) = uniform random number on the interval (a, b), newly generated for each j,
j_rand = randint(1, D) = integer randomly chosen from 1 to D, newly generated for each i.

The crossover probability CR_i ∈ [0, 1] roughly corresponds to the average fraction of vector components that are inherited from the mutation vector. In classic DE, CR_i = CR is a single parameter used to generate all trial vectors, while in many adaptive DE algorithms each individual i is associated with its own crossover probability CR_i.
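The binomial crossover rule can be sketched as follows (Python; the parent, mutant, and CR value are illustrative):

```python
import numpy as np

def binomial_crossover(x, v, cr, rng):
    """Build trial vector u: take each component from mutant v with
    probability cr, forcing at least one mutant component via j_rand."""
    D = len(x)
    mask = rng.random(D) < cr
    mask[rng.integers(D)] = True  # j_rand guarantees u differs from the parent
    return np.where(mask, v, x)

rng = np.random.default_rng(0)
x = np.zeros(5)   # parent vector (all zeros, for easy inspection)
v = np.ones(5)    # mutant vector (all ones)
u = binomial_crossover(x, v, cr=0.5, rng=rng)
# every component of u comes from either the parent or the mutant
```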
2.3.Selection
The selection operation selects the better of the parent vector X_i,g and the trial vector u_i,g according to their fitness values f(·). For a minimization problem, the selected vector is given by

X_i,g+1 = u_i,g, if f(u_i,g) <= f(X_i,g)
X_i,g+1 = X_i,g, otherwise.
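For a minimization problem this one-to-one selection is a single comparison (Python; the objective and vectors are illustrative):

```python
import numpy as np

def select(x, u, f):
    """Greedy one-to-one selection for minimization: the trial vector u
    replaces the parent x only if it is at least as good."""
    return u if f(u) <= f(x) else x

f = lambda z: float(np.sum(z**2))    # illustrative objective function
parent = np.array([2.0, 2.0])        # f(parent) = 8
trial = np.array([1.0, 1.0])         # f(trial) = 2, better
survivor = select(parent, trial, f)  # → trial
```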
Instructions for using the program
Variable definition
n = population size
itermax = maximum number of iterations
F = Scaling factor
Cr = Crossover rate
Lb = Lower bound
Ub = Upper bound
d = no. of variables
Sol = randomly generated population
Best = best member
fmin = minimum value of the objective function
V = mutated vector
g = constraints
We see that the value obtained for F = 0.5 and CR = 0.95 is the lowest, i.e., fmin = 1.7251.
Figure 2: fmin for F = 0.5 and CR = 0.8
Problem 2
Multi-objective optimization
The optimal design of the three-bar truss shown in the figure is considered using two different objectives, with the cross-sectional areas of members 1 (and 3) and 2 as design variables. The value of H is assumed to be 1.
Instructions for using program
Variable definition
n = population size
itermax = maximum number of iterations
F = Scaling factor
Cr = Crossover rate
Lb = Lower bound
Ub = Upper bound
d = no. of variables
Sol1 = randomly generated population for population 1
Best1 = best member for population 1
fmin1 = minimum value of the objective function for population 1
V1 = mutated vector for population 1
g1 = constraints for population 1
Sol2 = randomly generated population for population 2
Best2 = best member for population 2
fmin2 = minimum value of the objective function for population 2
V2 = mutated vector for population 2
g2 = constraints for population 2
Results
The results shown below are for 250 iterations and a population size of 2000:
Hence it is seen that the values of fmin for both objective functions change little even as the value of CR changes.
Figure 3: Results for F = 0.5 and CR = 0.8 for both objective functions
MATLAB code from the literature
clear all; close all; clc
%Function to be minimized
D=2;
objf=inline('4*x1^2-2.1*x1^4+(x1^6)/3+x1*x2-4*x2^2+4*x2^4','x1','x2');
objf=vectorize(objf);
%Initialization of DE parameters
N=20; %population size (total function evaluations will be itmax*N, must be>=5)
itmax=30;
F=0.8; CR=0.5; %mutation and crossover ratio
%Problem bounds
a(1:N,1)=-1.9; b(1:N,1)=1.9; %bounds on variable x1
a(1:N,2)=-1.1; b(1:N,2)=1.1; %bounds on variable x2
d=(b-a);
basemat=repmat(int16(linspace(1,N,N)),N,1); %used later
basej=repmat(int16(linspace(1,D,D)),N,1); %used later
%Random initialization of positions
x=a+d.*rand(N,D);
%Evaluate objective for all particles
fx=objf(x(:,1),x(:,2));
%Find best
[fxbest,ixbest]=min(fx);
xbest=x(ixbest,1:D);
%Iterate
for it=1:itmax
    permat=bsxfun(@(x,y) x(randperm(y(1))),basemat',N(ones(N,1)))';
    %Generate donors by mutation
    v(1:N,1:D)=repmat(xbest,N,1)+F*(x(permat(1:N,1),1:D)-x(permat(1:N,2),1:D));
    %Perform recombination
    r=repmat(randi([1 D],N,1),1,D);
    muv = ((rand(N,D)<CR) + (basej==r)) ~= 0;
    mux = 1-muv;
    u(1:N,1:D)=x(1:N,1:D).*mux(1:N,1:D)+v(1:N,1:D).*muv(1:N,1:D);
    %Greedy selection
    fu=objf(u(:,1),u(:,2));
    idx=fu<fx;
    fx(idx)=fu(idx);
    x(idx,1:D)=u(idx,1:D);
    %Find best
    [fxbest,ixbest]=min(fx);
    xbest=x(ixbest,1:D);
end %end loop on iterations
[xbest,fxbest]
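As a cross-check, the same six-hump camel-back minimization can be sketched in Python. This is an independent re-implementation of the MATLAB listing above (DE/best/1 with binomial crossover), not code from the cited literature; the random seed and generation count are illustrative choices. The global minimum of this function is approximately -1.0316:

```python
import numpy as np

def camel(x1, x2):
    """Six-hump camel-back function from the MATLAB listing."""
    return 4*x1**2 - 2.1*x1**4 + x1**6/3 + x1*x2 - 4*x2**2 + 4*x2**4

rng = np.random.default_rng(1)
N, D, F, CR, itmax = 20, 2, 0.8, 0.5, 200
a = np.array([-1.9, -1.1])            # lower bounds on (x1, x2)
b = np.array([1.9, 1.1])              # upper bounds on (x1, x2)
x = a + rng.random((N, D)) * (b - a)  # random initial population
fx = camel(x[:, 0], x[:, 1])
for _ in range(itmax):
    best = x[np.argmin(fx)]
    for i in range(N):
        r1, r2 = rng.choice([j for j in range(N) if j != i], 2, replace=False)
        v = np.clip(best + F * (x[r1] - x[r2]), a, b)  # DE/best/1 mutation
        mask = rng.random(D) < CR
        mask[rng.integers(D)] = True                   # binomial crossover
        u = np.where(mask, v, x[i])
        fu = camel(u[0], u[1])
        if fu < fx[i]:                                 # greedy selection
            x[i], fx[i] = u, fu
fbest = float(fx.min())
```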
References
[1] Godfrey C. Onwubolu and Donald Davendra, "Differential Evolution: A Handbook for Global Permutation-Based Combinatorial Optimization," Studies in Computational Intelligence, Volume 175.
[2] Kenneth V. Price, Rainer M. Storn, and Jouni A. Lampinen, "Differential Evolution: A Practical Approach to Global Optimization."
[5] Feifei Zheng, Angus R. Simpson, and Aaron C. Zecchin, "A coupled binary linear programming-differential evolution algorithm approach for water distribution system optimization," doi: 10.1061/(ASCE)WR.1943-5452.0000367.
[6] Wenyin Gong, Zhihua Cai, and Li Zhu, "An Efficient Multiobjective Differential Evolution Algorithm for Engineering Design."