• Introduction
• An Example
• Some Developments
• Research Issues
Optimization Methods
• Deterministic
• Probabilistic
Deterministic Method
Merits
• Give exact solutions
• Do not use any stochastic technique
• Rely on a thorough search of the feasible domain
Demerits
• Not robust: can only be applied to a restricted class of problems
• Often too time-consuming, or sometimes unable to solve real-world problems
Probabilistic Method
Merits
• Applicable to a wider set of problems, i.e. the function need not be convex, continuous, or explicitly defined
• Use a stochastic or probabilistic (i.e. random) approach
Demerits
• Converge to the global optimum only probabilistically
• Sometimes get stuck at local optima
Some Existing Probabilistic Methods
Why PSO for Optimization?
• Search speed
Particle Swarm Optimization Inspiration
Artificial Life
Inspiration (contd.)
Inventors
Developed in 1995 by
• Prof. James Kennedy (right)
• Prof. Russell Eberhart (left)
• PSO uses a population of individuals to search the feasible region of the function space. In this context, the population is called a swarm and the individuals are called particles.
Update Equations
Updated Velocity
The random coefficients r1 and r2 are drawn from rand(0, 1) to stop the swarm from converging too quickly.
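For reference, the velocity and position updates take the following form (the velocity equation with inertia weight w is restated later on the Stagnation slide; the position update is the standard complement and is written out here as an assumption):

v = w * v + c1 r1 (pbest − current) + c2 r2 (gbest − current)
current = current + v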
PSO Parameters
1. The number of particles:
Typically 20-40. For most problems, 10 particles are already enough to get good results.
2. Dimension of particles:
It is determined by the problem to be optimized.
3. Range of particles:
Also determined by the problem to be optimized; different ranges can be specified for different dimensions of the particles.
4. Vmax:
The maximum velocity; bounding the velocity in this way helps keep the swarm under control. One option is to set Vmax to the range of the particle, e.g. if X is in [-10, 10], then Vmax = 20. Another approach is Vmax = (UpBound − LoBound) / 5 (see the short sketch after this list).
5. Learning/acceleration factors:
c1 and c2 are usually both set to 2, although other settings have been used in different papers. Usually c1 equals c2 and lies in the range [0, 4].
6. The stopping criteria:
The maximum number of iterations the PSO executes and/or a minimum error requirement.
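A minimal sketch of the second Vmax rule and the resulting velocity clamping, in Python (the array values and variable names here are illustrative, not taken from the slides):

import numpy as np

# Vmax as one fifth of the search range, per the rule above
lo_bound = np.array([-10.0, -5.0])          # example lower bounds per dimension
up_bound = np.array([10.0, 5.0])            # example upper bounds per dimension
vmax = (up_bound - lo_bound) / 5.0          # -> [4., 2.]

# clamp a velocity vector component-wise to [-Vmax, Vmax]
v = np.array([12.0, -1.0])
v = np.clip(v, -vmax, vmax)                 # -> [4., -1.]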
Basic Flow of PSO
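Since the flowchart itself is not reproduced here, the following is a minimal Python sketch of the flow, assuming a gbest PSO with the parameter settings from the previous slide (the inertia weight w = 0.7, the function name pso, and the tolerance tol are illustrative assumptions):

import numpy as np

def pso(f, lo_bound, up_bound, dim, n_particles=30, w=0.7, c1=2.0, c2=2.0,
        max_iter=1000, tol=1e-8):
    # Minimise f over [lo_bound, up_bound]^dim with a basic gbest PSO.
    rng = np.random.default_rng()
    x = rng.uniform(lo_bound, up_bound, (n_particles, dim))  # initial positions
    v = np.zeros((n_particles, dim))                         # initial velocities
    vmax = (up_bound - lo_bound) / 5.0                       # Vmax rule from the previous slide

    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest_val.argmin()
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]

    for _ in range(max_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # velocity and position updates
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        v = np.clip(v, -vmax, vmax)              # keep the swarm under control
        x = np.clip(x + v, lo_bound, up_bound)

        # update personal and global bests
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better] = x[better]
        pbest_val[better] = vals[better]
        g = pbest_val.argmin()
        if pbest_val[g] < gbest_val:
            gbest, gbest_val = pbest[g].copy(), pbest_val[g]

        if gbest_val < tol:                      # minimum error requirement
            break
    return gbest, gbest_val

# example: minimise the sphere function in 2 dimensions
best, best_val = pso(lambda z: float(np.sum(z**2)), -10.0, 10.0, dim=2)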
An Example
Two Versions of PSO
BINARY PSO
BINARY PSO (contd.)
sigm(x) = 1 / (1 + exp(−x))
[Plot of the sigmoid function sigm(x) for x in [-6, 6], rising from 0 to 1.]
Velocity and Position Update
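A sketch of the usual binary-PSO rule: velocities stay real-valued, and each position bit is resampled through the sigmoid above. The inertia-weight form of the velocity update and the velocity bound vmax = 6 are assumptions here, not taken from the slides:

import numpy as np

def binary_pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, vmax=6.0):
    # x, pbest, gbest are 0/1 arrays; v is real-valued
    rng = np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v = np.clip(v, -vmax, vmax)                   # keep sigm(v) away from 0 and 1
    prob = 1.0 / (1.0 + np.exp(-v))               # sigm(v), as plotted on the previous slide
    x = (rng.random(x.shape) < prob).astype(int)  # bit d becomes 1 with probability sigm(v_d)
    return x, v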
No Free Lunch Theorem
Important Developments
A Brief Review
Inertia Weight
Why Inertia Weight
• When using PSO, it is possible for the magnitude of the velocities to become very large.
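One common way to control this, not necessarily the one used in these slides, is to decrease the inertia weight linearly over the run; values such as 0.9 down to 0.4 are typical in the literature:

def inertia_weight(iteration, max_iter, w_start=0.9, w_end=0.4):
    # linearly decreasing inertia weight: large early (exploration), small late (exploitation)
    return w_start - (w_start - w_end) * iteration / max_iter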
Constriction Factor
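In the standard Clerc and Kennedy formulation, the constriction factor is

χ = 2 / | 2 − φ − sqrt(φ^2 − 4φ) |,   where φ = c1 + c2 > 4,

and the typical choice φ = 4.1 gives χ ≈ 0.729.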
Fully Informed PSO
v = χ * [ v + Σ_{i∈N} c_i r_i (p(i) − current(i)) ]
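A sketch of this update in Python, assuming the usual fully-informed choice c_i = φ / |N| with φ = 4.1 and χ ≈ 0.729; the function name and defaults are illustrative:

import numpy as np

def fips_velocity(v, x, neighbour_pbests, chi=0.729, phi=4.1):
    # neighbour_pbests: list of p(i) vectors, one for each particle i in the neighbourhood N
    rng = np.random.default_rng()
    pull = np.zeros_like(v)
    for p in neighbour_pbests:
        r = rng.random(v.shape)                              # r_i ~ rand(0, 1)
        pull += (phi / len(neighbour_pbests)) * r * (p - x)  # c_i * r_i * (p(i) - current(i))
    return chi * (v + pull)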
Stagnation
v = w * v + c1 r1 (pbest − current) + c2 r2 (gbest − current)
v = χ * [ v + c1 r1 (pbest − current) + c2 r2 (gbest − current) ]
• The PSO algorithm performs well in the early stage, but easily becomes premature in the area of a local optimum.
• The velocity term is then scaled only by the inertia weight or the constriction factor.
• If the current position of a particle is identical to the global best position and its current velocity is small, the velocity in the next iteration will be even smaller; the particle then becomes trapped in this area, which leads to premature convergence.
• This phenomenon is known as stagnation.
Hybrid Particle Swarm Optimizer with Mutation (HPSOM)
HPSOM has the potential to escape from a local optimum and search in a new position. The mutation scheme randomly chooses a particle and then moves it to a different position in the search area. The operation is as follows:
mut(x_id) = x_id + Δx,  if rand() < 0.5
mut(x_id) = x_id − Δx,  if rand() > 0.5
where Δx is randomly obtained from [0, 0.1 × (max_range(d) − min_range(d))].
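A sketch of this mutation in Python, assuming the perturbation is applied to one randomly chosen dimension d of the selected particle; the function name and argument layout are illustrative:

import numpy as np

def hpsom_mutate(x, min_range, max_range):
    # x: position of the particle chosen for mutation; ranges are per-dimension arrays
    rng = np.random.default_rng()
    d = rng.integers(x.shape[0])                                 # pick a dimension d
    delta = rng.uniform(0.0, 0.1 * (max_range[d] - min_range[d]))
    x = x.copy()
    x[d] = x[d] + delta if rng.random() < 0.5 else x[d] - delta
    return x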
qPSO: Quadratic Approximation (QA)
R1 → particle with the best fitness value
R = 0.5 * [ (R2^2 − R3^2) f(R1) + (R3^2 − R1^2) f(R2) + (R1^2 − R2^2) f(R3) ] / [ (R2 − R3) f(R1) + (R3 − R1) f(R2) + (R1 − R2) f(R3) ]
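Applied coordinate-wise, the formula above can be written directly; here f1, f2, f3 stand for the fitness values f(R1), f(R2), f(R3), and the caller must ensure the denominator is non-zero (which holds when at least two of the three points are distinct and the three points do not lie on a straight line):

def qa_point(r1, r2, r3, f1, f2, f3):
    # minimum of the quadratic passing through (R1, f(R1)), (R2, f(R2)), (R3, f(R3))
    num = (r2**2 - r3**2) * f1 + (r3**2 - r1**2) * f2 + (r1**2 - r2**2) * f3
    den = (r2 - r3) * f1 + (r3 - r1) * f2 + (r1 - r2) * f3
    return 0.5 * num / den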
The Process of Hybridization
Figure 4.1: Transition from the i-th iteration to the (i+1)-th iteration. The swarm is split by particle index into two sub-swarms: particles s1 ... sp (S1) are updated by PSO, i.e. pbest and gbest (= GBEST) are determined and the velocity update is applied, while particles sp+1 ... sm (S2) are updated by QA, provided R1, R2 and R3 can be chosen such that at least two of them are distinct. If the stopping criterion is not satisfied, ITER = ITER + 1 and the process repeats.
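A compact sketch of one such iteration, with pso_update and qa_update standing in for the two updates sketched earlier (both names and the list-based swarm representation are illustrative):

def qpso_iteration(swarm, p, pso_update, qa_update):
    # particles 1..p form sub-swarm S1 (PSO update); particles p+1..m form S2 (QA update)
    s1 = pso_update(swarm[:p])
    s2 = qa_update(swarm[p:])
    return s1 + s2              # recombined swarm (lists concatenate)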
• Hybridization
• Parallel implementation
• New variants
• Modification of the velocity update equation
• Introduction of new operators into PSO
• Discrete particle swarm optimization
• Interaction with biological intelligence
• Convergence analysis
Some Unsolved Issues
• Convergence analysis.
• Dealing with discrete variables.
• Combination of various PSO techniques
to deal with complex problems.
• Interaction with biological intelligence.
• Cryptanalysis.