
An Overview of

Particle Swarm Optimization

Jagdish Chand Bansal


Mathematics Group
Birla Institute of Technology and Science, Pilani
Email: jcbansal@gmail.com, bits-pilani.ac.in
Overview

• Introduction
• An Example
• Some Developments
• Research Issues

Optimization Methods

• Deterministic

• Probabilistic

Deterministic Method

Merits
• Give exact solutions
• Do not use any stochastic technique
• Rely on a thorough search of the feasible domain

Demerits
• Not robust: can only be applied to a restricted class of problems
• Often too time-consuming, or sometimes unable to solve real-world problems
Probabilistic Method

Merits
• Applicable to a wider set of problems, i.e. the function need not be convex, continuous, or explicitly defined
• Use a stochastic, i.e. random, approach

Demerits
• Converge to the global optimum only probabilistically
• Sometimes get stuck at local optima

Some Existing Probabilistic Methods

• Simulated Annealing (SA)
• Random Search Technique (RST)
• Genetic Algorithm (GA)
• Memetic Algorithm (MA)
• Ant Colony Optimization (ACO)
• Differential Evolution (DE)
• Particle Swarm Optimization (PSO)

Why PSO for Optimization ?

• Continuous optimization problems that are
  – non-differentiable
  – non-convex
  – highly nonlinear
  – rich in local optima

• Discrete optimization problems
  – NP-complete problems: no efficient exact algorithm is known for any problem in this class

• Search speed

Particle Swarm Optimization Inspiration

Artificial Life

The term artificial life (A-life) describes research into human-made systems that possess some of the essential properties of life. A-life research is two-fold:

• A-life studies how computational techniques can help in studying biological phenomena

• A-life studies how biological techniques can help solve computational problems

Inspiration cont..

PSO is based on bird flocking, fish schooling, and the swarming theory of A-life.

About fish schooling: "In theory at least, individual members of the school can profit from the discoveries and previous experience of all other members of the school during the search for food." (sociobiologist E. O. Wilson)

This is the basic concept behind PSO.

Inventors

Developed in 1995 by
• James Kennedy
• Russell Eberhart

• PSO uses a population of individuals to search the feasible region of the function space. In this context, the population is called a swarm and the individuals are called particles.

• Though the PSO algorithm has been shown to perform well, researchers have not yet been able to fully explain how it works.

Update Equations

Each particle tries to modify its current position and velocity according to the distance between its current position and pbest, and the distance between its current position and gbest.

Updated Velocity

Velocity update equation (the rate of change of a particle's position):

v = v + c1 r1 (pbest − current) + c2 r2 (gbest − current)

• r1, r2 are random numbers in (0, 1), used to stop the swarm converging too quickly
• c1, c2 are acceleration factors, which can be used to change the weighting between personal and population experience
• c1 r1 (pbest − current) is the cognitive component, which draws individuals back to their previous best situations
• c2 r2 (gbest − current) is the social component, where individuals compare themselves to others in their group

Position update equation:

current = current + v

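A minimal sketch of these two update rules for a single particle, assuming numpy arrays for positions and velocities (all names are illustrative):

import numpy as np

def update_particle(current, v, pbest, gbest, c1=2.0, c2=2.0):
    # Velocity update: cognitive pull toward pbest, social pull toward gbest
    r1, r2 = np.random.rand(), np.random.rand()
    v = v + c1 * r1 * (pbest - current) + c2 * r2 * (gbest - current)
    # Position update: move the particle by its new velocity
    current = current + v
    return current, v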
PSO Parameters
1. The number of particles:
Typically 20 - 40 particles. For most problems, 10 particles are already enough to get good results.
2. Dimension of particles:
Determined by the problem to be optimized.
3. Range of particles:
Also determined by the problem to be optimized; different ranges can be specified for different dimensions of the particles.
4. Vmax:
The maximum velocity, used to help keep the swarm under control. One option is to set the range of the particle as Vmax, e.g. if X belongs to [-10, 10], then Vmax = 20. Another approach is
Vmax = ⌊(UpBound − LoBound)/5⌋

5. Learning/acceleration factors:
c1 and c2 are usually both set to 2. Other settings have also been used in different papers, but usually c1 equals c2, with values in the range [0, 4].
6. The stopping criteria:
The maximum number of iterations the PSO executes and/or a minimum error requirement.
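These settings might be collected in a small configuration object, as in the sketch below (a minimal illustration; the defaults simply mirror the values above):

from dataclasses import dataclass

@dataclass
class PSOConfig:
    n_particles: int = 30      # typically 20-40
    dim: int = 2               # set by the problem
    lo: float = -10.0          # lower bound of the search range
    hi: float = 10.0           # upper bound of the search range
    c1: float = 2.0            # cognitive acceleration factor
    c2: float = 2.0            # social acceleration factor
    max_iter: int = 1000       # stopping criterion

    @property
    def vmax(self) -> float:
        return (self.hi - self.lo) / 5.0   # the (UpBound - LoBound)/5 rule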
Basic Flow of PSO

1. Initialize the swarm randomly in the solution space.
2. Evaluate the fitness of individual particles.
3. Modify gbest, pbest, and the velocity.
4. Move each particle to its new position.
5. Go to step 2 and repeat until convergence or a stopping condition is satisfied.

A minimal implementation sketch of this loop is given below.

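A minimal gbest-PSO sketch of these five steps, assuming numpy and a toy sphere objective (all names and parameter values are illustrative, not the author's code):

import numpy as np

def sphere(x):
    # Toy objective: minimize the sum of squares
    return np.sum(x ** 2)

def pso(f=sphere, dim=2, n_particles=30, max_iter=200,
        c1=2.0, c2=2.0, lo=-10.0, hi=10.0):
    vmax = (hi - lo) / 5.0
    x = np.random.uniform(lo, hi, (n_particles, dim))      # step 1: initialize
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])                # step 2: evaluate
    g = np.argmin(pbest_val)
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    for _ in range(max_iter):                              # step 5: repeat
        r1 = np.random.rand(n_particles, dim)
        r2 = np.random.rand(n_particles, dim)
        v = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # step 3
        v = np.clip(v, -vmax, vmax)                        # keep swarm under control
        x = x + v                                          # step 4: move
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better] = x[better]
        pbest_val[better] = vals[better]
        g = np.argmin(pbest_val)
        if pbest_val[g] < gbest_val:
            gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    return gbest, gbest_val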
An Example

A step-by-step walk-through of the PSO procedure.

Two Versions of PSO

• gbest PSO: the global version is faster but might converge to a local optimum for some problems.

• lbest PSO: the local version is a little slower but is not as easily trapped in a local optimum.

One can use the global version to get a quick result and the local version to refine the search. A small sketch of an lbest ring neighborhood follows.
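A hedged sketch of how the local best could be computed on a ring topology, a common lbest neighborhood (function and argument names are illustrative):

def local_best(pbest, pbest_val, i, k=1):
    # Best personal best among particle i's ring neighbors (k on each side)
    n = len(pbest)
    idx = [(i + j) % n for j in range(-k, k + 1)]  # wrap around the ring
    best = min(idx, key=lambda j: pbest_val[j])
    return pbest[best]

Replacing gbest with local_best(...) in the velocity update turns gbest PSO into lbest PSO.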

BINARY PSO

• This version has attracted much less attention than the continuous PSO.

• A particle's position is not a real value, but either 0 or 1.

• The velocity represents the probability of a bit taking the value 0 or 1, not the rate of change of the particle's position as in PSO for continuous optimization.

BINARY PSO

The particle's position in a dimension is randomly generated using the sigmoid function:

sigm(x) = 1 / (1 + exp(−x))

[Figure: plot of the sigmoid function, rising from 0 to 1 over x in [−6, 6]]
Velocity and Position Update

v_id = v_id + c1 r1 (p_id − x_id) + c2 r2 (p_gd − x_id)

x_id = 1 if rand() < sigm(v_id), and 0 otherwise

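A minimal sketch of this binary position rule, assuming numpy (names are illustrative):

import numpy as np

def sigm(v):
    return 1.0 / (1.0 + np.exp(-v))

def binary_position_update(v):
    # Each bit becomes 1 with probability sigm(v_id), and 0 otherwise
    return (np.random.rand(*v.shape) < sigm(v)).astype(int)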
No Free Lunch Theorem

• In a controversial paper in 1997 (available at the AUC library), Wolpert and Macready proved that "averaged over all possible problems or cost functions, the performance of all search algorithms is exactly the same".

• No algorithm is better on average than blind guessing.

Important Developments

Almost all modifications vary the velocity update equation in some way.

A Brief Review

• PSO-W : With Inertia Weight
• PSO-C : With Constriction Factor
• FIPSO : Fully Informed PSO
• HPSOM : Hybrid PSO with Mutation
• MeanPSO : Mean PSO
• qPSO : Quadratic approximation PSO

Inertia Weight

Shi and Eberhart introduced the inertia weight w into the algorithm (PSO-W). The iterative expressions become:

v = w v + c1 r1 (pbest − current) + c2 r2 (gbest − current)
current = current + v

w is the inertia weight, which enhances the exploration ability of the particles.

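A common companion to PSO-W is a linearly decreasing inertia weight; the sketch below uses the 0.9 → 0.4 schedule often seen in the literature (the bounds are an assumption, not taken from these slides):

def inertia_weight(iteration, max_iter, w_max=0.9, w_min=0.4):
    # Large w early (global search), small w late (local search)
    return w_max - (w_max - w_min) * iteration / max_iter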
Why Inertia Weight
• When using PSO, it is possible for the magnitude of the velocities to become very large.

• Performance can suffer if Vmax is inappropriately set.

• To control the growth of velocities, a dynamically adjusted or constant inertia weight was introduced.

• Larger w: greater global search ability.

• Smaller w: greater local search ability.

Constriction Factor

Clerc and Kennedy proposed a constriction factor that is effective in making the algorithm converge (PSO-C):

v = χ (v + c1 r1 (pbest − current) + c2 r2 (gbest − current))
current = current + v

χ = 2 / | 2 − φ − √(φ² − 4φ) |,  where φ = c1 + c2 > 4

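A minimal sketch of computing χ under the constraint φ = c1 + c2 > 4 (with c1 = c2 = 2.05 this gives the widely quoted χ ≈ 0.7298):

import math

def constriction_factor(c1=2.05, c2=2.05):
    phi = c1 + c2
    assert phi > 4, "the constriction factor requires phi > 4"
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))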
Fully Informed PSO

A particle is attracted by every other particle in its neighborhood:

v = χ ( v + Σ_{i ∈ N} c_i r_i (p(i) − current) )

where N is the particle's neighborhood and p(i) is the personal best of neighbor i.

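A hedged sketch of the fully informed velocity update for one particle; splitting a total acceleration φ equally among the neighbors follows the usual FIPS formulation, but treat the details as assumptions rather than the slide's exact scheme:

import numpy as np

def fips_velocity(v, x, neighbor_pbests, chi=0.7298, phi=4.1):
    # Every neighbor's personal best attracts the particle
    acc = np.zeros_like(v)
    n = len(neighbor_pbests)
    for p in neighbor_pbests:
        r = np.random.rand(*v.shape)
        acc += (phi / n) * r * (np.asarray(p) - x)  # equal share of phi per neighbor
    return chi * (v + acc)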
Stagnation
v = w v + c1 r1 (pbest − current) + c2 r2 (gbest − current)
v = χ (v + c1 r1 (pbest − current) + c2 r2 (gbest − current))

• The PSO algorithm performs well in the early stage, but easily becomes premature in the area of a local optimum.
• For a particle sitting at the global best, the attraction terms vanish, and the velocity is related only to the inertia weight or the constriction factor.
• If the current position of a particle is identical to the global best position and its current velocity is small, the velocity in the next iteration will be smaller still. The particle is then trapped in this area, which leads to premature convergence.
• This phenomenon is known as stagnation.
Hybrid Particle Swarm Optimizer with
Mutation (HPSOM).
HPSOM has the potential to escape from a local optimum and search in a new position. The mutation scheme randomly chooses a particle and moves it to a different position in the search area. The operation is as follows:

mut(x_id) = x_id + Δx, if rand() < 0.5
mut(x_id) = x_id − Δx, if rand() ≥ 0.5

where Δx is randomly drawn from [0, 0.1 × (max range(d) − min range(d))].

This mutation operation is governed by a constant called the probability of mutation.
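A minimal sketch of this mutation operator (the mutation probability value is illustrative; the slides only say it is a constant):

import random

def hpsom_mutate(x_id, min_range_d, max_range_d, p_mut=0.1):
    # With probability p_mut, perturb the coordinate to escape a local optimum
    if random.random() >= p_mut:
        return x_id
    dx = random.uniform(0.0, 0.1 * (max_range_d - min_range_d))
    return x_id + dx if random.random() < 0.5 else x_id - dx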
MeanPSO

MeanPSO replaces the attractors pbest and gbest of the classical velocity update with the linear combinations (gbest + pbest)/2 and (gbest − pbest)/2.

[Figure: number line comparing the attraction points of classical PSO (current, pbest, gbest) with those of MeanPSO]
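A hedged sketch of the MeanPSO velocity update; the mean-based attractors follow the description above, but the exact normalization should be checked against the MeanPSO paper:

import numpy as np

def meanpso_velocity(v, x, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    r1 = np.random.rand(*np.shape(x))
    r2 = np.random.rand(*np.shape(x))
    a1 = (gbest + pbest) / 2.0   # first mean-based attractor
    a2 = (gbest - pbest) / 2.0   # second mean-based attractor
    return w * v + c1 * r1 * (a1 - x) + c2 * r2 * (a2 - x)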
qPSO: Quadratic Approximation (QA)

R1 → particle with the best fitness value
R2 and R3 → randomly chosen distinct particles

R* = 0.5 × [ (R2² − R3²) f(R1) + (R3² − R1²) f(R2) + (R1² − R2²) f(R3) ] /
     [ (R2 − R3) f(R1) + (R3 − R1) f(R2) + (R1 − R2) f(R3) ]

where f(Ri) is the objective function value at Ri, for i = 1, 2, 3.

The calculations are done component-wise to obtain R*.

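A minimal component-wise sketch of the QA point, assuming numpy arrays for R1, R2, R3 and scalar objective values (the division-by-zero guard is my addition):

import numpy as np

def qa_point(R1, R2, R3, f1, f2, f3, eps=1e-12):
    # Component-wise vertex of the quadratic through (R1,f1), (R2,f2), (R3,f3)
    num = (R2**2 - R3**2) * f1 + (R3**2 - R1**2) * f2 + (R1**2 - R2**2) * f3
    den = (R2 - R3) * f1 + (R3 - R1) * f2 + (R1 - R2) * f3
    den = np.where(np.abs(den) < eps, eps, den)  # guard against division by zero
    return 0.5 * num / den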
The Process of Hybridization

Figure 4.1: Transition from the i-th iteration to the (i+1)-th iteration. [Diagram: the swarm s1, ..., sm, ordered by particle index, is split in every iteration; particles s1, ..., sp are updated by PSO and particles sp+1, ..., sm are updated by QA, producing s'1, ..., s'm.]

The percentage of the swarm which is updated by QA is called the Coefficient of Hybridization (CH).
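A small sketch of splitting the swarm according to CH (all names are illustrative):

def split_swarm(swarm, ch=0.3):
    # First (1 - ch) fraction goes to PSO, the remaining ch fraction to QA
    p = int(round(len(swarm) * (1.0 - ch)))
    return swarm[:p], swarm[p:]   # (PSO subswarm S1, QA subswarm S2)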
Flowchart of the qPSO Process

1. Start: generate a random swarm and set ITER = 0.
2. Evaluate the objective function value of all particles and determine GBEST.
3. If the stopping criterion is satisfied, report the best particle and stop.
4. Otherwise, set ITER = ITER + 1 and split the swarm S into subswarms S1 and S2.
5. For S1: determine pbest and gbest (= GBEST), update the velocities, and update the positions using PSO.
6. For S2: if it is possible to determine R1, R2 and R3 such that at least two of them are distinct, set R1 = GBEST, choose R2 and R3, and update the positions using QA; otherwise skip the QA update.
7. Evaluate the objective function value of all particles, determine GBEST, and go to step 3.
Research Issues

• Hybridization
• Parallel Implementation
• New Variants
  – Modification in the Velocity Update Equation
  – Introduction of new operators in PSO
• Discrete Particle Swarm Optimization
• Interaction with biological intelligence
• Convergence Analysis
Some Unsolved Issues

• Convergence analysis.
• Dealing with discrete variables.
• Combination of various PSO techniques
to deal with complex problems.
• Interaction with biological intelligence.
• Cryptanalysis.

