
Algorithm and Implementation of Particle Swarm Optimization

Yilin Dai, Chao Liang, Wen Zhang

Particle Swarm Optimization (PSO)

Particle swarm optimization (PSO) is a population-based stochastic optimization technique inspired by the social behavior of bird flocking and fish schooling.
Individual swarm members can profit from the discoveries and previous experience of all other members of the school.
PSO does not require the optimization problem to be differentiable, as classic optimization methods do.

Kennedy, J. and Eberhart, R.: Particle Swarm Optimization. Proceedings of the Fourth IEEE International Conference on Neural Networks, Perth, Australia, IEEE Service Center, 1942-1948, 1995.

PSO - General concept

PSO consists of a swarm of particles.
Each particle resides at a position in the search space.
The fitness of each particle represents the quality of its position.
The particles fly over the search space with a certain velocity.
The velocity (both direction and speed) of each particle is influenced by its own best position found so far and the best solution found so far by its neighbors.
However, PSO does not guarantee that an optimal solution is ever found.

Original PSO - Notation

x := position
v := velocity
p := the best position the particle has found so far
g := the best position found so far in its neighborhood
U(0, c1) := a random vector uniformly distributed in [0, c1]
⊗ := the element-wise multiplication operator

Kennedy, J. and Eberhart, R.: Particle Swarm Optimization. Proceedings of the Fourth IEEE International Conference on Neural Networks, Perth, Australia, IEEE Service Center, 1942-1948, 1995.

Original PSO (oPSO)

Randomly initialize particle positions and velocities.
While not terminated, for each particle i:
  Evaluate the desired optimization fitness function.
  Compare it with the particle's local best (pBest); update pBest if necessary.
  Compare it with the global best (gBest); update gBest if necessary.
Then, for each particle, update its velocity and position (see the sketch below):
  v ← v + U(0, c1) ⊗ (p − x) + U(0, c2) ⊗ (g − x)
  x ← x + v
where the old v is the momentum term, U(0, c1) ⊗ (p − x) is the cognitive component, and U(0, c2) ⊗ (g − x) is the social component.
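A minimal NumPy sketch of this loop (our illustration; the function name `opso`, its defaults, and the fixed iteration budget are assumptions, not the authors' code):

```python
import numpy as np

def opso(f, low, high, n_particles=30, c1=2.0, c2=2.0, max_iter=500):
    """Sketch of the original PSO loop (after Kennedy & Eberhart, 1995).
    Minimizes f over the box [low, high]^D; all details are illustrative."""
    low, high = np.asarray(low, float), np.asarray(high, float)
    rng = np.random.default_rng()

    # Randomly initialize particle positions and velocities.
    x = rng.uniform(low, high, size=(n_particles, low.size))
    v = rng.uniform(-(high - low), high - low, size=x.shape)

    pbest = x.copy()                          # local best positions (pBest)
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()  # global best position (gBest)

    for _ in range(max_iter):
        # Velocity update: momentum + cognitive + social components.
        v += (rng.uniform(0, c1, x.shape) * (pbest - x)     # U(0,c1) ⊗ (p - x)
              + rng.uniform(0, c2, x.shape) * (gbest - x))  # U(0,c2) ⊗ (g - x)
        x += v

        # Evaluate fitness; update pBest and gBest where improved.
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()
```

For example, `opso(lambda z: (1 - z[0])**2 + 100*(z[1] - z[0]**2)**2, [-10, -10], [10, 10])` runs it on the 2D Rosenbrock problem from the experiments below.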

Original PSO - Problems

Higher acceleration coefficients result in less stable systems in which the velocity has a tendency to explode.
Limiting the velocity does not necessarily prevent particles from leaving the search space, nor does it help to guarantee convergence.
In practice, the velocity is usually kept within a range [-Vmax, Vmax], and the position is kept within the bounds of the search space, as sketched below.
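A sketch of such clamping (the helper name and signature are ours):

```python
import numpy as np

def clamped_step(x, v, vmax, low, high):
    """One clamped update step: bounds the swarm,
    but does not by itself guarantee convergence."""
    v = np.clip(v, -vmax, vmax)    # keep velocity within [-Vmax, Vmax]
    x = np.clip(x + v, low, high)  # keep position inside the search box
    return x, v
```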

Inertia weighted PSO (wPSO)

An inertia weight w scales the momentum term:
  v ← w·v + U(0, c1) ⊗ (p − x) + U(0, c2) ⊗ (g − x)
This update rule allows for convergence without the use of a velocity range (Vmax).
Rule-of-thumb settings: w = 0.7298 and c1 = c2 = 1.49618.

Ref: Shi, Y. and Eberhart, R.: A Modified Particle Swarm Optimizer. Proceedings of the 1998 IEEE International Conference on Evolutionary Computation (IEEE World Congress on Computational Intelligence), 69-73, 1998.
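In code, only the velocity line of the oPSO sketch above changes (a sketch using the rule-of-thumb constants; the helper name is ours):

```python
import numpy as np

def wpso_velocity(v, x, pbest, gbest, w=0.7298, c1=1.49618, c2=1.49618, rng=None):
    """Inertia-weighted velocity update (after Shi & Eberhart, 1998).
    Scaling the momentum term by w < 1 damps velocities over time."""
    rng = rng or np.random.default_rng()
    return (w * v
            + rng.uniform(0, c1, v.shape) * (pbest - x)   # cognitive component
            + rng.uniform(0, c2, v.shape) * (gbest - x))  # social component
```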

Constricted coefficients PSO (ccPSO)

An elegant method for preventing explosion, ensuring convergence, and eliminating the parameter Vmax.

Ref: Clerc, M. and Kennedy, J.: The Particle Swarm: Explosion, Stability, and Convergence in a Multidimensional Complex Space. IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, 58-73, 2002.
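In the cited paper, the whole velocity update is scaled by a constriction coefficient χ = 2 / |2 − φ − √(φ² − 4φ)| with φ = c1 + c2 > 4; a sketch of computing it (the function name is ours):

```python
import math

def constriction_coefficient(c1=2.05, c2=2.05):
    """Constriction coefficient chi (Clerc & Kennedy, 2002).
    For the common choice c1 = c2 = 2.05 (phi = 4.1), chi ~ 0.7298."""
    phi = c1 + c2
    if phi <= 4:
        raise ValueError("constriction requires c1 + c2 > 4")
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

# The velocity update then becomes
#   v = chi * (v + U(0,c1) ⊗ (pbest - x) + U(0,c2) ⊗ (gbest - x))
# which bounds the trajectory without any Vmax clamp.
```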

Our PSO (logitPSO)

[logitPSO update rule: a constriction-style coefficient built around a logistic term of the form 1/(1 + e^(-k)); the full equations are not legible in the source.]

Testing problems

2D SIAM function on [-0.1, 0.1]^2: multimodal (6 local minima); goal value -1.864183379308362.
2D Rosenbrock on [-10, 10]^2: unimodal (1 local minimum); minimum at x = (1, 1) with f = 0.
High-dimensional Rosenbrock on [-10, 10]^k (2D version sketched below).
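For reference, the 2D Rosenbrock objective used in the experiments below (the slides do not reproduce the SIAM function's closed form, so only Rosenbrock is sketched):

```python
def rosenbrock2d(x):
    """2D Rosenbrock on [-10, 10]^2: unimodal, minimum f(1, 1) = 0."""
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
```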

oPSO-2D SIAM
c1     c2     x                         f(x)
0.5    ?      (0.073989, 0.047033)      -1.734773
0.5    ?      (0.072733, 0.046374)      -1.734772
?      ?      (0.075191, -0.093183)     -1.864183
?      ?      (-0.023386, 0.047773)     -1.861672
?      ?      (0.074477, 0.046680)      -1.734773
(? marks parameter values that are not legible in the source.)

oPSO-2D Rosenbrock
C1      C2      x                        f(x)       Steps
0.001   ?       (1.848079, 3.464551)     0.960840   500
0.001   ?       (1.055420, 1.073297)     0.168034   500
0.001   0.001   (0.943059, 0.891614)     0.003750   500
0.001   0.002   (1.011934, 1.029707)     0.003388   500
0.002   0.001   (1.114412, 1.242337)     0.013108   500

wPSO-2D SIAM
w        c1 = c2   x                                          f(x)                  Iterations
0.7298   1.49618   (0.073584629022878, 0.046124034082284)     -1.734773021811118    500
0.5      1.49618   (0.022531759829990, 0.005818958450126)     -1.734773021811118    500
0.7      1.49618   (0.027244766510358, -0.088923333425637)    -1.864183379308362    500
0.9      1.49618   (0.026328347338718, -0.049650109134200)    -1.864183379307904    500

wPSO-2D Rosenbrock
w                  c1 = c2   x                                          f(x)                      Steps
0.3 (no Vmax)      1.49618   (1.050740930664612, 1.104049930632629)     0.002574646364804         500
0.5 (no Vmax)      1.49618   (1.000000002032773, 1.000000004192520)     5.744383999612768e-18     500
0.7298 (no Vmax)   1.49618   (1.000000221024119, 1.000000443790460)     4.915517809560085e-14     500
0.9 (no Vmax)      1.49618   (0.855483014525343, 0.728477376134214)     0.022023419836644         500

ccPSO-2D SIAM
c1    c2    x                                           f(x)                  Iterations
?     ?     (0.074587999105150, -0.092084380747153)     -1.864183379308362    85
?     ?     (0.073584628703011, 0.046124034340407)      -1.734773021811117    58
?     ?     (0.073589218892025, 0.046145655099420)      -1.734771690319168    500
?     ?     (0.074876073508515, 0.019093728765458)      -1.864183379245115    500

ccPSO-2D Rosenbrock
C1            C2    x                   f(x)     Steps
?             ?     (0.2074, 0.0393)    0.6296   96
1 (no Vmax)   ?     (0.3522, 0.1173)    0.4242   61
?             ?     (1.4179, 2.0114)    0.1747   500
2 (no Vmax)   ?     (2.4258, 5.8853)    2.0330   44
12            ?     (1.5179, 2.3042)    0.2682   500
3 (no Vmax)   12    (1.3112, 1.7205)    0.0970   37

LogitPSO-2D SIAM

LogitPSO-2D Rosenbrock

Higher Dimension Problem

Generalized Rosenbrock function


f(x) = Σ_{i=1}^{D-1} [ (1 - x_i)^2 + 100 (x_{i+1} - x_i^2)^2 ]

Difficulty is defined by the formula -ln(σ), where σ is the probability of success when choosing a position in the search space at random.

Ref: Clerc, M.: Particle Swarm Optimization, 2005, p. 56.
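A direct transcription of the generalized function and the difficulty measure (the function names are ours; σ must be estimated for a given success threshold):

```python
import math
import numpy as np

def rosenbrock(x):
    """Generalized Rosenbrock: sum_{i=1}^{D-1} (1 - x_i)^2 + 100 (x_{i+1} - x_i^2)^2."""
    x = np.asarray(x, float)
    return float(np.sum((1 - x[:-1]) ** 2 + 100 * (x[1:] - x[:-1] ** 2) ** 2))

def difficulty(sigma):
    """Clerc's difficulty: -ln(sigma), where sigma is the probability that one
    uniformly random position in the search space already counts as a success."""
    return -math.log(sigma)
```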

Rosenbrock functions

Ref: Clerc, M.: Particle Swarm Optimization, 2005, p. 57.

Results: Rosenbrock function (5D)

Method     Dimension   Objective value   Position                                    Steps to converge
oPSO       5D          0.00027224        (1.0003, 1.0002, 1.0005, 1.0002, 0.9990)    500*
wPSO       5D          1.5258e-14        (1.0000, 1.0000, 1.0000, 1.0000, 1.0000)    500*
ccPSO      5D          1.1515            (0.8090, 0.6571, 0.4367, 0.1899, 0.0217)    500*
logitPSO   5D          6.5663e-05        (1.0009, 1.0018, 1.0035, 1.0070, 1.0141)    440

Results: Rosenbrock function (10D)

Method     Dimension   Objective value   Steps to converge
oPSO       10D         1.7866            500*
wPSO       10D         0.047223          500*
ccPSO      10D         0.46757           500*
logitPSO   10D         0.00098987        459

Take home message

Selection of the parameters is problem-dependent and crucial to the behavior of the algorithm.
We tried to develop a novel PSO algorithm; high-dimensional optimization problems remain hard to solve.
However, results are hard to compare because of the stochastic sampling.
When the search space is huge, PSO loses its power.
In our testing, ccPSO works better for the SIAM function, while wPSO works better for the Rosenbrock function.

demo1

demo2

demo3

demo4
