Particle Swarm Optimization

Mohamed A. El-Sharkawi
Computational Intelligence Applications (CIA) Lab.
Department of EE, Box 352500
University of Washington
Seattle, WA 98195-2500
elsharkawi@ee.washington.edu
http://cialab.ee.washington.edu

Coordination with Direct Communication

M. A. El-Sharkawi, PSO 2

PSO vs SST

[Figure: Single Search vs. PSO]

Particle Swarm Optimization

• Inventors: James Kennedy and Russell Eberhart
• An algorithm originally developed to imitate the motion of a flock of birds, or insects
• Assumes information exchange (social interactions) among the search agents
• Basic idea: keep track of
  – Global best
  – Self best

How does it work?

• Problem: find X which minimizes f(X)
• Particle Swarm:
  – Start: random set of solution vectors
  – Experiment: include randomness in the choice of new states
  – Remember: encode the information about good solutions
  – Improvise: use the 'experience' information to initiate search in new regions

New Motion

[Figure: the new motion combines a component in the direction of the previous motion, a component in the direction of the global best, and a component in the direction of the personal best at the previous step, added to the current motion.]

PSO Modeling

• Each solution vector is modeled as
  – The coordinates of a bird, or a 'particle', in a 'swarm' flying through the search space
  – All the particles have a non-zero velocity, so they never stop flying and are always sampling new regions
• Each 'particle' remembers
  – Where the global best and the local best are

• The search is guided by
  – The collective consciousness of the swarm
  – Introducing randomness into the dynamics in a controlled manner

Particle Swarm Dynamics

x(k+1) = x(k) + v(k)

v(k+1) = w·v(k)                               (inertia)
       + r(0,a1)·(x_SelfBest(k) − x(k))       (self consciousness of the swarm; controlled randomness)
       + r(0,a2)·(x_GroupBest(k) − x(k))      (the collective consciousness of the swarm)

(All particles keep a non-zero velocity, so the swarm never stops flying.)
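The two recurrences above can be sketched as a single update function. This is a minimal Python illustration, not code from the slides: the name `pso_step` and the parameter defaults are mine, and r(0,a) is taken to be a uniform draw in [0, a] made independently per dimension.

```python
import random

def pso_step(x, v, x_self_best, x_group_best, w=0.9, a1=2.0, a2=2.0):
    """One step of the slide dynamics for a single particle:
       x(k+1) = x(k) + v(k)
       v(k+1) = w*v(k) + r(0,a1)*(x_SelfBest(k) - x(k))
                       + r(0,a2)*(x_GroupBest(k) - x(k))
    Lists stand in for the vectors; r(0,a) is uniform in [0, a]."""
    new_x = [xi + vi for xi, vi in zip(x, v)]
    new_v = [w * vi
             + random.uniform(0, a1) * (sb - xi)
             + random.uniform(0, a2) * (gb - xi)
             for xi, vi, sb, gb in zip(x, v, x_self_best, x_group_best)]
    return new_x, new_v
```

With a1 = a2 = 0 the randomness vanishes and the particle simply coasts on its inertia, which makes the role of each term easy to check.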

where
  – x is a solution vector (a 'particle') and v is the velocity of this particle
  – a1 and a2 are two scalars
  – w is the inertia
  – r(0,a) is a uniform random number generator between 0 and a

PSO Design Parameters

• a1 and a2
• w: should be between 0.9 and 1.2
  – High values of w give a global search
  – Low values of w give a local search
• vmax: to be designed according to the nature of the search surface
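Putting the dynamics and the design parameters together, a complete swarm loop might look as follows. This is a sketch under my own assumptions (the function name `pso_minimize`, the fixed w = 0.9, the bounds, and the swarm size are all illustrative choices, not from the slides); the vmax clamp is the velocity limit the slides say should match the search surface.

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=200,
                 w=0.9, a1=2.0, a2=2.0, vmax=0.5,
                 lo=-5.0, hi=5.0, seed=0):
    """Minimize f over [lo, hi]^dim with a basic particle swarm.
    Each velocity component is clamped to [-vmax, vmax]."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    self_best = [x[:] for x in xs]              # each particle's best position
    self_best_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: self_best_f[i])
    group_best, group_best_f = self_best[g][:], self_best_f[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + rng.uniform(0, a1) * (self_best[i][d] - xs[i][d])
                            + rng.uniform(0, a2) * (group_best[d] - xs[i][d]))
                vs[i][d] = max(-vmax, min(vmax, vs[i][d]))  # clamp to vmax
                xs[i][d] += vs[i][d]
            fx = f(xs[i])
            if fx < self_best_f[i]:             # update self best
                self_best[i], self_best_f[i] = xs[i][:], fx
                if fx < group_best_f:           # update group best
                    group_best, group_best_f = xs[i][:], fx
    return group_best, group_best_f
```

For example, `pso_minimize(lambda x: sum(t*t for t in x), dim=2)` drives the sphere function toward its minimum at the origin.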

Example: Boundary Identification (Edge Detector)

• To identify a subset of the search space (the boundary) with a specific value
• Each flock finds one point on that boundary (edge)
• Flocks search sequentially

Border (Edge) Identification

[Figure: search space with Class 1 and Class 2 regions separated by the boundary]

Border (Edge) Identification

Techniques: Particle Swarm, Genetic Algorithm
(PSO is faster and more accurate)

The Art of Fitness Function

• To find points anywhere on the boundary

Metric: |f(x) − boundary value|

Results - Case 1

The Art of Fitness Function

• Distribute points uniformly on the boundary

Metric: |f(x) − boundary value| − distance to closest neighbor
(to penalize proximity to neighbors)
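This neighbor-penalized metric is easy to write down directly. The sketch below is my own illustration (the names `edge_fitness`, `boundary_value`, and the `neighbors` list of already-found points are placeholders, not from the slides); lower fitness is better.

```python
import math

def edge_fitness(x, f, boundary_value, neighbors):
    """Fitness = |f(x) - boundary_value| - distance to the closest
    neighbor point. Subtracting the nearest-neighbor distance rewards
    particles for spreading out along the boundary."""
    on_boundary = abs(f(x) - boundary_value)
    if not neighbors:
        return on_boundary
    nearest = min(math.dist(x, n) for n in neighbors)
    return on_boundary - nearest
```

The case-3 variant below simply adds a `+ math.dist(x, current_state)` term to keep new boundary points near the current state.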

Results - Case 2

The Art of Fitness Function

• Distribute points uniformly on the boundary, close to the current state

Metric: |f(x) − boundary value| − distance to closest neighbor + distance to current state
(penalize proximity to neighbors; penalize distance from the current state)

Results - Case 3

