

A report on,

Differential Evolution using Matlab

Prepared in partial fulfillment of

Study oriented seminar,

Course code: CE G514


Structural Optimization

Submitted by,

Mr. Mandar Pandurang Ganbavale


(2013H143013H)

Under guidance and supervision of

Prof. A. Vasan

Birla Institute of Technology and Science, Pilani.


Hyderabad Campus.
(2013-2014)
Differential Evolution
Using
Matlab

CERTIFICATE
This is to certify that the project entitled

"Differential Evolution using Matlab"

submitted by

Mr. Ganbavale Mandar Pandurang

(2013H143013H)

is in partial fulfillment of the requirements of course CE G514, Study Oriented
Project, BITS Pilani, Hyderabad Campus, for the academic year 2013-2014. It is a record of
his own work carried out under my supervision and guidance.

Prof. A. Vasan
(Department of Civil Engineering)

Birla Institute of Technology and Science, Pilani. Hyderabad Campus.


2013-14.

ACKNOWLEDGMENT

Firstly, I would like to thank BITS Pilani for giving me the opportunity to take up this
Study Oriented Project.

I sincerely thank Prof. P. N. Rao for giving me the opportunity to work on this topic. I
am grateful for his support and guidance, which have helped me expand my horizons of thought
and expression.
I am also thankful to my friends for their help in sharing information about
various aspects of this project.

Mr. Mandar P. Ganbavale


(2013H143013H)

Outline of the Proposed Topic of Project work

Name of Candidate : Mandar P. Ganbavale


ID No. : 2013H143013H
Place of Research Work and Organization : Department of Civil Engineering
BITS Pilani, Hyderabad Campus
Proposed Supervisor’s Details
Name : Dr. A. Vasan
Qualification : Ph. D
Designation : Professor
Organization : Department of Civil Engineering
BITS Pilani, Hyderabad Campus
Project topic
Differential Evolution using MATLAB.

Objectives of the Proposed Project work


1. To study the multi-objective optimization technique Differential Evolution.
2. To study the application of this technique to civil engineering problems.
3. To apply this optimization technique to a 25-bar space truss problem, minimizing
weight and deflection and maximizing frequency.

Work done so far


- Read the literature on what exactly the Differential Evolution algorithm is and how
it differs from GA.
- Read journals and papers on the same.
- Read about applications of this algorithm to a welding problem and a 2D truss problem
reported in the literature.
- Read their MATLAB code.

Introduction
Optimization problems are everywhere in academic research and in real-world applications
such as engineering, finance, and the sciences. Wherever resources like space, time and cost
are limited, optimization problems arise. Researchers and practitioners therefore need
an efficient and robust optimization approach to solve problems of different characteristics
that are fundamental to their daily work; at the same time, solving a complex
optimization problem should not itself be very difficult. In addition, an optimization algorithm
should be able to reliably converge to the true optimum for a variety of different problems.
Furthermore, the computing resources spent on searching for a solution should not be
excessive. Thus, a useful optimization method should be easy to use, reliable and efficient in
achieving satisfactory solutions.

Differential Evolution (DE) is an optimization approach that addresses these requirements.
- The differential evolution algorithm was introduced by Storn and Price in 1995.
- Differential Evolution has been shown to be a simple yet efficient optimization
approach for solving a variety of benchmark problems as well as many real-world
applications.
- Differential evolution, together with evolution strategies (ES) and evolutionary
programming (EP), can be categorized into a class of population-based, derivative-free
methods known as evolutionary algorithms.
- All these approaches mimic Darwinian evolution and evolve a population of individuals
from one generation to the next by analogous evolutionary operations such as mutation,
crossover and selection.

Steps involved in the differential evolution algorithm

[Flowchart: Initialization → Mutation → Cross-over → Selection → convergence check, looping back until converged]

1. Sampling the search space at multiple, randomly chosen initial points, i.e., a population
of individual vectors.
2. Since differential evolution is by nature a derivative-free continuous function
optimizer, it encodes parameters as floating-point numbers, manipulates them with
simple arithmetic operations such as addition, subtraction and multiplication, and
generates new points that are perturbations/mutations of existing points. For this,
differential evolution mutates a (parent) vector in the population with a scaled
difference of other randomly selected individual vectors.
3. The resultant mutation vector is crossed over with the corresponding parent vector to
generate a trial or offspring vector.
4. Then, in a one-to-one selection process between each pair of offspring and parent vectors,
the one with the better fitness value survives and enters the next generation.

This procedure repeats for each parent vector, and the survivors of all parent-offspring pairs
become the parents of the new generation in the evolutionary search cycle. The evolutionary
search stops when the algorithm converges to the true optimum or a certain termination
criterion, such as a maximum number of generations, is reached.
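
A minimal, self-contained MATLAB sketch of this cycle is given below, using DE/rand/1
mutation with binomial crossover on a toy sphere objective; all names and parameter values
here are illustrative assumptions, and the individual operations are detailed in the
following sections.

% Minimal DE (DE/rand/1/bin) sketch on a toy sphere objective
NP = 30; D = 2; F = 0.8; CR = 0.9; gmax = 100;   % assumed example settings
lb = -5*ones(1,D); ub = 5*ones(1,D);             % assumed search bounds
fobj = @(x) sum(x.^2, 2);                        % toy objective to minimize
X = repmat(lb,NP,1) + rand(NP,D).*repmat(ub-lb,NP,1);  % initialization
f = fobj(X);
for g = 1:gmax
    for i = 1:NP
        idx = randperm(NP); idx(idx==i) = []; r = idx(1:3);   % distinct, ~= i
        v = X(r(1),:) + F*(X(r(2),:) - X(r(3),:));            % mutation
        v = min(max(v, lb), ub);                              % simple bound clamp
        jr = randi(D); mask = rand(1,D) <= CR; mask(jr) = true;
        u = X(i,:); u(mask) = v(mask);                        % binomial crossover
        fu = fobj(u);
        if fu < f(i), X(i,:) = u; f(i) = fu; end              % greedy selection
    end
end
[fbest, ibest] = min(f); xbest = X(ibest,:)      % best solution found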

Background of the Differential Evolution algorithm


1. Evolutionary algorithms

- Follow the principle of natural evolution and survival of the fittest as described
by Darwinian theory.
- The major applications of evolutionary algorithms are in optimization, although they
have also been used to conduct data mining, generate learning systems, and build
experimental frameworks to validate theories about biological evolution and natural
selection, etc.
- EAs differ from traditional optimization techniques in that they usually evolve a
population of solutions or individual points in the search space of decision variables,
instead of starting from a single point. At each iteration, an evolutionary algorithm
generates new solutions, also called offspring, by mutating and/or recombining current
or parent solutions, and then conducts a competitive selection to weed out poor
solutions.
- In comparison with traditional optimization techniques, such as calculus-based
nonlinear programming methods, evolutionary algorithms are usually more robust
and achieve a better balance between exploration and exploitation of the search
space when optimizing many real-world problems.
- Different main streams of evolutionary algorithms have evolved over the past forty
years. The majority of the current implementations descend from three strongly related
but independently developed branches:
o Evolution strategies
o Genetic algorithms, and
o Evolutionary programming.
- These approaches are closely related to each other in terms of their underlying
principles, while their exact operations and usually applied problem domains differ
from one approach to another.
- Genetic algorithms are better suited for discrete optimization because their decision
variables are originally encoded as bit strings and are modified by logical operators.
- Evolution strategies and evolutionary programming, however, concentrate on mutation,
although evolution strategies may also incorporate crossover or recombination as an
operator.
- Evolution strategies are by nature continuous function optimizers because they encode
parameters as floating-point numbers and manipulate them by arithmetic operations.

Although differences exist among these evolutionary algorithms, they all rely on the
concept of a population of individuals or solutions, which undergo probabilistic
operations such as mutation, crossover and selection to evolve toward solutions of better
fitness in the search space of decision variables.
- The mutation introduces new information into the population by randomly generating
variations to existing individuals.
- The crossover or recombination typically performs an information exchange between
different individuals in the current population.
- The selection imposes a driving force towards the optimum by preferring individuals
of better fitness.
- The fitness value may reflect the objective function value and/or the level of constraint
satisfaction.
- These operations compose a loop, and evolutionary algorithms usually execute a
number of generations until the best-so-far solution is satisfactory or another
termination criterion is fulfilled.

2. Differential evolution
Similar to other evolutionary algorithms, differential evolution is a population-based,
derivative-free function optimizer. It usually encodes decision variables as
floating-point numbers and manipulates them with simple arithmetic operations such as
addition, subtraction, and multiplication.

The initial population $\{X_{i,0} = (x_{1,i,0}, x_{2,i,0}, x_{3,i,0}, \ldots, x_{D,i,0}) \mid i = 1, 2, 3, \ldots, NP\}$ is
randomly generated according to a normal or uniform distribution satisfying

$x_j^{low} \le x_{j,i,0} \le x_j^{up}$ for $j = 1, 2, 3, \ldots, D$

where
$NP$ = population size,
$D$ = dimension of the problem,
$x_j^{low}$ = lower limit of the j-th vector component,
$x_j^{up}$ = upper limit of the j-th vector component.
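
This is the same operation performed at the top of the sketch in the introduction; in
isolation, a MATLAB version (with assumed example sizes and bound vectors lb and ub) is:

% Uniform random initialization over [x_j^low, x_j^up], matching the formula above
NP = 30; D = 2;                                  % assumed example sizes
lb = -5*ones(1,D); ub = 5*ones(1,D);             % assumed bound vectors
X0 = repmat(lb,NP,1) + rand(NP,D).*repmat(ub-lb,NP,1);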

After initialization, DE enters a loop of evolutionary operations: mutation, crossover
and selection.

2.1. Mutation

At each generation $g$, this operation creates mutation vectors $v_{i,g}$ based on the
current parent population $\{X_{i,g} = (x_{1,i,g}, x_{2,i,g}, \ldots, x_{D,i,g}) \mid i = 1, 2, 3, \ldots, NP\}$.
The following are different mutation strategies frequently used in the literature:

DE/rand/1: $v_{i,g} = X_{r_0,g} + F_i (X_{r_1,g} - X_{r_2,g})$

DE/current-to-best/1: $v_{i,g} = X_{i,g} + F_i (X_{best,g} - X_{i,g}) + F_i (X_{r_1,g} - X_{r_2,g})$

DE/best/1: $v_{i,g} = X_{best,g} + F_i (X_{r_1,g} - X_{r_2,g})$

where
$r_0, r_1, r_2$ = distinct integers uniformly chosen from the set $\{1, 2, \ldots, NP\} \setminus \{i\}$,
$X_{r_1,g} - X_{r_2,g}$ = difference vector used to mutate the parent,
$X_{best,g}$ = best vector at the current generation,
$F_i$ = the mutation factor, which usually ranges over the interval $(0, 1+)$.

In classic DE algorithms, $F_i = F$ is a single parameter used for the generation of all
mutation vectors, while in many adaptive DE algorithms each individual $i$ is
associated with its own mutation factor $F_i$. The above mutation strategies can be
generalized by implementing multiple difference vectors other than $X_{r_1,g} - X_{r_2,g}$;
the resulting strategy is named 'DE/-/k' according to the number $k$ of difference
vectors adopted. The mutation operation may generate trial vectors whose
components violate the predefined boundary constraints. Possible solutions to
tackle this problem include resetting schemes, penalty schemes, etc. A simple
method is to set the violating component to the midpoint between the violated
bound and the corresponding component of the parent individual, i.e.

$v_{j,i,g} = \dfrac{x_j^{low} + x_{j,i,g}}{2}$ if $v_{j,i,g} < x_j^{low}$

$v_{j,i,g} = \dfrac{x_j^{up} + x_{j,i,g}}{2}$ if $v_{j,i,g} > x_j^{up}$

where $v_{j,i,g}$ and $x_{j,i,g}$ are the j-th components of the mutation vector $v_{i,g}$ and the
parent vector $X_{i,g}$ at generation $g$, respectively.

This method performs well, especially when the optimal solution is located near or
on the boundary.
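
As a sketch continuing the variables of the earlier example (X, NP, F, lb, ub and the
parent index i are assumed from there), DE/rand/1 mutation with this midpoint repair rule
could read:

% DE/rand/1 mutation for parent i with the midpoint bound-repair rule above
idx = randperm(NP); idx(idx == i) = [];     % three distinct indices, all ~= i
r0 = idx(1); r1 = idx(2); r2 = idx(3);
v = X(r0,:) + F*(X(r1,:) - X(r2,:));        % v = X_r0 + F (X_r1 - X_r2)
low = v < lb; v(low) = (lb(low) + X(i,low))/2;   % repair at the lower bound
up  = v > ub; v(up)  = (ub(up)  + X(i,up))/2;    % repair at the upper bound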

2.2. Cross-over

After mutation, a 'binomial' crossover operation forms the final trial vector

$u_{i,g} = (u_{1,i,g}, u_{2,i,g}, \ldots, u_{D,i,g})$

$u_{j,i,g} = \begin{cases} v_{j,i,g} & \text{if } rand_j(0,1) \le CR_i \text{ or } j = j_{rand} \\ x_{j,i,g} & \text{otherwise} \end{cases}$

where
$rand_j(a, b)$ = a uniform random number on the interval $(a, b)$, newly generated for each $j$,
$j_{rand} = randint(1, D)$ = an integer randomly chosen from 1 to $D$, newly generated for each $i$.
The crossover probability $CR_i \in [0, 1]$ roughly corresponds to the average fraction of
vector components that are inherited from the mutation vector.

In classic DE, $CR_i = CR$ is a single parameter that is used to generate all trial vectors,
while in many adaptive DE algorithms, each individual $i$ is associated with its own
crossover probability $CR_i$.
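
Against the equation above, and again reusing the assumed variables X, v, D, CR and i from
the earlier sketches, a binomial crossover step could read:

% Binomial crossover of parent X(i,:) and mutant v, per the equation above
jrand = randi(D);                 % component j_rand is always taken from the mutant
mask = rand(1,D) <= CR;           % rand_j(0,1) <= CR, drawn anew for each j
mask(jrand) = true;               % enforce j = j_rand
u = X(i,:);                       % start from the parent...
u(mask) = v(mask);                % ...and inherit the selected components from the mutant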

2.3. Selection
The selection operation selects the better of the parent vector $X_{i,g}$ and the
trial vector $u_{i,g}$ according to their fitness values $f(\cdot)$. For example, for a
minimization problem, the selected vector is given by

$X_{i,g+1} = \begin{cases} u_{i,g} & \text{if } f(u_{i,g}) < f(X_{i,g}) \\ X_{i,g} & \text{otherwise} \end{cases}$

and used as a parent vector in the next generation.

The above one-to-one selection procedure is generally kept fixed across different DE
algorithms, while the crossover may have variants other than the binomial operation.
Thus, a DE algorithm is historically named, for example, DE/rand/1/bin,
connoting its DE/rand/1 mutation strategy and binomial crossover operation.
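
With the assumed trial vector u, objective handle fobj and stored parent fitness f(i) from
the sketches above, this one-to-one selection is simply:

% One-to-one greedy selection for minimization, per the equation above
fu = fobj(u);                     % fitness of the trial vector
if fu < f(i)                      % trial beats its own parent...
    X(i,:) = u;                   % ...so it enters generation g+1
    f(i) = fu;
end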

Application of DE to problems

Single-objective problem
The welded beam shown in the figure is designed for minimum cost subject to constraints on
the shear stress in the weld (τ), the bending stress in the beam (σ), the buckling load on
the bar (Pc), the end deflection of the beam (δ), and side constraints.

Instructions for using the program
Variable definitions
n = population size
itermax = maximum number of iterations
F = Scaling factor
Cr = Crossover rate
Lb = Lower bound
Ub = Upper bound
d = number of variables
Sol = randomly generated population
Best = best member
fmin = minimum value of the objective function
V = mutated vector
g = constraints

Steps during the program

1. Initialize all the parameters required to run the code, i.e., n, Lb, Ub, itermax, d, etc.
2. After initialization has been done, a random population is generated between the upper
and lower bounds.
3. The constraints are evaluated and the objective function is obtained for the entire
population (a penalty-based sketch of this step is shown after this list).
4. The main part of the program performs the mutation, crossover and selection; it runs
to give the best member of each iteration as well as the global best member.
5. In the main loop, the difference of two random vectors, scaled by the factor F, is added
to the best member vector calculated so far, and a bound check is imposed.
6. After the mutation takes place, the crossover of the vectors begins, producing a trial
vector.
7. The trial vector is compared with the present best member and, if found to be better,
is selected into the new population for subsequent iterations.
8. Finally, the results for the best member and the function value are obtained and tabulated.
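
The report does not list the program itself here; as a sketch of the penalized evaluation
in step 3, assuming a handle fobj for the raw cost and gcon for the constraint vector
(feasible when all entries are <= 0), with an illustrative penalty weight:

% Static-penalty evaluation sketch for step 3 (fobj, gcon and the weight 1e6
% are illustrative assumptions, not the report's actual code)
penalized = @(xrow, fobj, gcon) fobj(xrow) + 1e6*sum(max(gcon(xrow), 0).^2);
% Example use over a population matrix Sol with n rows:
% fvals = arrayfun(@(k) penalized(Sol(k,:), fobj, gcon), (1:n)');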

Results of the program


The following table is generated for 500 iterations:

We see that the value obtained for F = 0.5 and CR = 0.95 is the lowest, i.e., fmin = 1.7251.

Figure 2: fmin for F = 0.5 and CR = 0.8

Problem 2
Multi-objective optimization
The optimal design of the three-bar truss shown in the figure is considered using two
different objectives, with the cross-sectional areas of members 1 (and 3) and 2 as design
variables, assuming the value of H as 1.

Instructions for using the program
Variable definitions
n = population size
itermax = maximum number of iterations
F = Scaling factor
Cr = Crossover rate
Lb = Lower bound
Ub = Upper bound
d = number of variables
Sol1 = randomly generated population for population 1
Best1 = best member for population 1
fmin1 = minimum value of the objective function for population 1
V1 = mutated vector for population 1
g1 = constraints for population 1
Sol2 = randomly generated population for population 2
Best2 = best member for population 2
fmin2 = minimum value of the objective function for population 2
V2 = mutated vector for population 2
g2 = constraints for population 2

Steps during the program

1. Initialize all the parameters required to run the code, i.e., n, Lb, Ub, itermax, d, etc.
2. After initialization has been done, a random population is generated between the upper
and lower bounds.
3. The constraints are evaluated and the objective function is obtained for the entire
population.
4. The main part of the program performs the mutation, crossover and selection; it runs
to give the best member of each iteration as well as the global best member. The two
objectives are handled by two independent populations (see the sketch after this list).
5. In the main loop, the difference of two random vectors, scaled by the factor F, is added
to the best member vector calculated so far, and a bound check is imposed.
6. After the mutation takes place, the crossover of the vectors begins, producing a trial
vector.
7. The trial vector is compared with the present best member and, if found to be better,
is selected into the new population for subsequent iterations.
8. Finally, the results for the best member and the function value are obtained and tabulated.
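
Given the paired variables above (Sol1/Sol2, Best1/Best2, etc.), one way to organize the
two runs is to wrap the single-objective solver in a function and call it once per
objective; de_solver, f1, g1, f2 and g2 below are assumed names, not the report's actual
code:

% One independent DE run per objective (assumed function names)
[Best1, fmin1] = de_solver(@f1, @g1, n, d, Lb, Ub, F, Cr, itermax);
[Best2, fmin2] = de_solver(@f2, @g2, n, d, Lb, Ub, F, Cr, itermax);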

Results
The results shown below are for 250 iterations and a population size of 2000:

Hence it is seen that the values of fmin for both objective functions change very little
even though the value of CR changes.

Figure 3: Results for F = 0.5 and CR = 0.8 for both objective functions

MATLAB code from the literature
clear all; close all; clc
% Function to be minimized: the six-hump camel back test function
D=2;
objf=inline('4*x1^2-2.1*x1^4+(x1^6)/3+x1*x2-4*x2^2+4*x2^4','x1','x2');
objf=vectorize(objf);
% Initialization of DE parameters
N=20;     % population size (total function evaluations will be itmax*N, must be >= 5)
itmax=30; % maximum number of generations
F=0.8; CR=0.5; % mutation factor and crossover probability
% Problem bounds
a(1:N,1)=-1.9; b(1:N,1)=1.9; % bounds on variable x1
a(1:N,2)=-1.1; b(1:N,2)=1.1; % bounds on variable x2
d=(b-a);
basemat=repmat(int16(linspace(1,N,N)),N,1); % population index matrix, used for permutations
basej=repmat(int16(linspace(1,D,D)),N,1);   % component index matrix, used in crossover
% Random initialization of positions
x=a+d.*rand(N,D);
% Evaluate objective for all particles
fx=objf(x(:,1),x(:,2));
% Find best
[fxbest,ixbest]=min(fx);
xbest=x(ixbest,1:D);
% Iterate
for it=1:itmax
    % Random permutation of population indices for each individual
    permat=bsxfun(@(x,y) x(randperm(y(1))),basemat',N(ones(N,1)))';
    % Generate donors by DE/best/1 mutation
    v(1:N,1:D)=repmat(xbest,N,1)+F*(x(permat(1:N,1),1:D)-x(permat(1:N,2),1:D));
    % Perform binomial recombination (column r is always taken from the donor)
    r=repmat(randi([1 D],N,1),1,D);
    muv = ((rand(N,D)<CR) + (basej==r)) ~= 0;
    mux = 1-muv;
    u(1:N,1:D)=x(1:N,1:D).*mux(1:N,1:D)+v(1:N,1:D).*muv(1:N,1:D);
    % Greedy one-to-one selection
    fu=objf(u(:,1),u(:,2));
    idx=fu<fx;
    fx(idx)=fu(idx);
    x(idx,1:D)=u(idx,1:D);
    % Find best
    [fxbest,ixbest]=min(fx);
    xbest=x(ixbest,1:D);
end % end loop on iterations
[xbest,fxbest]
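
When run, the script prints the best point found and its objective value. The test function
minimized here is the six-hump camel back function, whose global minimum is approximately
-1.0316 at roughly (0.0898, -0.7126) and (-0.0898, 0.7126), so with N = 20 and itmax = 30
the printed pair should typically land close to one of these points.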

References
[1] G. C. Onwubolu and D. Davendra, "Differential Evolution: A Handbook for Global
Permutation-Based Combinatorial Optimization," Studies in Computational Intelligence,
Vol. 175.

[2] K. V. Price, R. M. Storn and J. A. Lampinen, "Differential Evolution: A Practical
Approach to Global Optimization."

[3] V. Feoktistov, "Differential Evolution: In Search of Solutions," Springer Optimization
and Its Applications (series editors P. M. Pardalos, University of Florida, and D.-Z. Du,
University of Texas at Dallas).

[4] V. Arunachalam, "Optimization Using Differential Evolution," Water Resources Research
Report No. 060, July 2008.

[5] F. Zheng, A. R. Simpson and A. C. Zecchin, "A coupled binary linear
programming-differential evolution algorithm approach for water distribution system
optimization," doi: 10.1061/(ASCE)WR.1943-5452.0000367.

[6] W. Gong, Z. Cai and L. Zhu, "An Efficient Multiobjective Differential Evolution
Algorithm for Engineering Design."
