
Particle Swarm Optimization Algorithm Based on Dynamic Memory Strategy

Qiong Chen, Shengwu Xiong, Hongbing Liu

School of Computer Science and Technology, Wuhan University of Technology, Wuhan 430070, China

ch-chong@hotmail.com, xiongsw@whut.edu.cn, liuhbing@sohu.com

ABSTRACT
This paper mainly studies the influence of memory on individual performance in a particle swarm system. Based on observations of social phenomena from the perspective of social psychology, the concept of individual memory contribution is defined, and several measurement methods for determining the level of effect of individual memory on an individual's behavior are discussed. A dynamic memory particle swarm optimization algorithm is implemented by dynamically assigning an appropriate weight to each individual's memory according to the selected metric values. Numerical experiment results on a benchmark optimization function set show that the proposed scheme can effectively and adaptively adjust the weight of individual memory for different optimization problems. The numerical results also demonstrate that dynamic memory is an effective improvement strategy for preventing premature convergence in the particle swarm optimization algorithm.

Categories and Subject Descriptors


I.2.8 [Artificial Intelligence]: Problem Solving, Control Methods, and Search – Heuristic methods

General Terms
Algorithms

Keywords
particle swarm optimization, dynamic memory strategy, individual memory weight

1. INTRODUCTION

The purpose of a swarm intelligence (SI) algorithm is to find better solutions to specific problems through swarm collaboration. Particle swarm optimization (PSO) is a population-based optimization method originally developed by Kennedy and Eberhart [9]. The PSO algorithm simulates the cooperation phenomenon in human society [10]. Each individual represents a solution in the search space. The ultimate goal of PSO is not to improve all individuals so that the whole group reaches an optimum state, but to select the individual with the best performance through some kind of information sharing among all individuals. The problem-solving capability of the selected individual determines the success or failure of the particle swarm. It may seem, then, that the final performance of the particle swarm has no direct relation to the other individuals' abilities, so it is necessary to study a single individual's behavior in the particle swarm from a microscopic point of view. On the one hand, individuals in the particle swarm influence each other in one way or another. An individual imitates the behavior of the individuals directly related to itself, and the individual with the best final performance is no exception. Therefore, research on improving the performance [5, 3, 13, 6] of all individuals is also of great significance to the performance of the whole particle swarm. On the other hand, the individual with the best final performance is in fact a member of the whole particle swarm. At the initial stage of the particle swarm, it is difficult to judge which individual will become the ultimate winner. Studying all individuals on the same level is therefore also studying the ultimate winner.

The standard PSO [9]: each individual in PSO flies through the search space with a velocity that is dynamically adjusted according to its own flying experience and its companions' flying experience. The particles are updated according to the following equations. The first line of equation (1) calculates a new velocity for each particle based on its previous velocity v_i, the location p_i (lbest) at which the particle's best fitness has been achieved so far, and the best location p_nbr(k) (gbest) among the neighbors at which the best fitness has been achieved so far. The second line of equation (1) updates each particle's position in the solution hyperspace. The two random numbers R_1 and R_2 are generated independently, while c_1 and c_2 are two learning factors. The use of the inertia weight w has provided improved performance in a number of applications [2].

v_i = w·v_i + c_1·R_1·(p_i − x_i) + c_2·R_2·(p_nbr(k) − x_i),
x_i = x_i + v_i        (1)
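As a concrete illustration, a minimal sketch of the update in equation (1) follows. The function name, array shapes, and default parameter values are our own assumptions for this sketch, not values prescribed by the paper.

```python
import numpy as np

def standard_pso_step(x, v, p_best, nbr_best, w=0.7298, c1=2.05, c2=2.05):
    """One iteration of the update in equation (1).

    x, v     : (n_particles, dim) arrays of positions and velocities
    p_best   : (n_particles, dim) best position found by each particle (p_i, lbest)
    nbr_best : (n_particles, dim) best neighbor position per particle (p_nbr(k), gbest)
    w, c1, c2: inertia weight and learning factors (illustrative defaults)
    """
    r1 = np.random.rand(*x.shape)  # R1, drawn independently per component
    r2 = np.random.rand(*x.shape)  # R2
    v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (nbr_best - x)
    x = x + v
    return x, v
```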



This paper focuses on the contribution of individual memory to individual success by analyzing individual behavior [11] in the particle swarm system. Based on this analysis, an improved PSO algorithm is proposed on the basis of the dynamic memory strategy [7, 1, 8]. Finally, experimental results on benchmark testing functions are used to verify the effectiveness and feasibility of the proposed algorithm. The paper also gives convergence curves to further explain the performance of the proposed algorithm.

2. INDIVIDUAL MEMORY CONTRIBUTION

If the contribution of individual memory to individual success can be quantified, then this value can be called the individual memory contribution. Within one particle swarm, each individual's memory contribution is not fixed over its life cycle, and not all individuals' memory contributions are the same. The following is the formal definition of individual memory contribution: assume p ∈ S is an individual of the particle swarm S. To a certain extent, the individual memory contribution of particle p measures the importance of its past successful experience and is denoted C(p). If C(p_1) > C(p_2), then particle p_1 should put more emphasis on its past successful experience than particle p_2. The weight of lbest is assigned according to the individual memory contribution.

We first define a unified mechanism to measure individual memory contribution as a real number, so that the individuals of the particle swarm can be sorted by the value of their individual memory contribution. In most evolutionary computation methods, the strength of an individual is related to its fitness [12]. In this paper, f is the fitness function, which directly calculates a particle's fitness and takes greater values for better solutions. There are several different methods to measure the contribution:

Success in one iteration. This measures the best solution found by the particular particle:

C(p_t) = f(p_t)

Success over the whole run. This approach is the same as success in one iteration, but computes a running average over all iterations:

C(p_t) = α·C(p_{t−1}) + (1 − α)·f(p_t)

where 0 ≤ α ≤ 1. This approach has the advantage of diluting the influence of early iterations: even if a particle is very successful in the early phase, its individual memory contribution will gradually decrease if it cannot maintain that successful state.

Efficiency over the last two iterations. A particle's efficiency can be measured by the improvement in the fitness of the best solutions found in the last two iterations:

C(p_t) = (f(p_t) − f(p_{t−1})) / f(p_t)

Under this measure, a more efficient individual has a greater individual memory contribution.

Efficiency over the whole run. Analogous to the success measurement, this approach measures a particle's efficiency throughout the whole run. It can be calculated through the following formula:

C(p_t) = α·C(p_{t−1}) + (1 − α)·(f(p_t) − f(p_{t−1})) / f(p_t)

where 0 ≤ α ≤ 1. Using efficiency as a standard for measuring individual memory contribution is a new idea. Under this standard, the individual with the largest individual memory contribution is not the one with the best performance, but the promising one with the greatest improvement scope. In this way, each individual can be treated from a developmental point of view. If an individual performs well but shows no improvement for a long time, it is likely trapped in the beautiful memories of the past and unable to continue moving forward. At that point, its individual memory contribution should be reduced so that it continues to search for a higher success point.
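The four measures above can be written compactly; a sketch under the paper's definitions follows. The function names and the default α are illustrative assumptions, and the efficiency measures assume f(p_t) ≠ 0.

```python
def success_one_iteration(f_t):
    # C(p_t) = f(p_t): raw fitness of the particle's best solution this iteration
    return f_t

def success_whole_run(c_prev, f_t, alpha=0.9):
    # C(p_t) = alpha*C(p_{t-1}) + (1-alpha)*f(p_t): running average that
    # dilutes the influence of early iterations
    return alpha * c_prev + (1 - alpha) * f_t

def efficiency_last_two(f_t, f_prev):
    # C(p_t) = (f(p_t) - f(p_{t-1})) / f(p_t): relative improvement scope
    return (f_t - f_prev) / f_t

def efficiency_whole_run(c_prev, f_t, f_prev, alpha=0.9):
    # C(p_t) = alpha*C(p_{t-1}) + (1-alpha)*(f(p_t)-f(p_{t-1}))/f(p_t)
    return alpha * c_prev + (1 - alpha) * (f_t - f_prev) / f_t
```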

3. INDIVIDUAL MEMORY WEIGHT ADJUSTMENT STRATEGIES

The individual memory weight assigned according to each individual's memory contribution is denoted φ_M. This paper adopts the following method to design strategies for dynamically adjusting the individual memory weight: individuals whose individual memory contribution meets the qualification are selected according to certain rules and assigned the fixed individual memory weight, while the weights of the remaining individuals are set to zero; in other words, the remaining individuals are deprived of their weights. The two most important tasks here are establishing an appropriate rule, that is, deciding how to grant or withdraw individual memory weight according to the individual memory contribution, and deciding the fixed value of the individual memory weight. The following uniform distribution is used to calculate the individual memory weight:

φ_M = U(0, φ_max / (|N| + 1))        (2)

where |N| is the individual's neighborhood scale, that is, the number of individuals that can affect it, and φ_max = 4.1, because Clerc's analysis does not restrict how the total acceleration is split between these two parts [4]. Due to the randomness of particle swarm initialization, many individuals have inherent advantages. To be fair, only an individual that makes continuous progress earns the right to this priority. This mechanism forces the individuals in better positions to keep making progress and thus effectively avoids local convergence of the particle swarm. This is why we use efficiency to measure an individual's memory contribution. In consideration of random factors in the iteration process, the average efficiency is selected to measure the individual memory contribution, namely:

C(p_t) = α·C(p_{t−1}) + (1 − α)·(f(p_t) − f(p_{t−1})) / f(p_t)        (3)

where 0 < α ≤ 1 and t is the iteration index. In addition, we use roulette wheel selection to assign the individual memory weight to the corresponding individual. In each iteration, every individual's memory weight is updated; only an individual that maintains a large individual memory contribution can hold the individual memory weight over the long term. The probability that an individual p is granted the individual memory weight is denoted P_M, where x below ranges over the individuals of the particle swarm S. Obviously,


P_M(p) = C(p) / Σ_{x∈S} C(x)        (4)

Here, the velocity update formula can be redefined as:

v_i ← χ·(v_i + φ_M·(p_i − x_i) + Σ_{k=1}^{N_i} φ_k·(p_nbr(k) − x_i)),
x_i ← x_i + v_i        (5)

where N_i is the neighborhood scale of particle i, and nbr(k) is the k-th neighbor of i. φ_M and φ_k are, respectively, the weight of the particle's individual memory and the weight factor imposed by the k-th neighbor, and χ is the shrinkage factor:

φ_k = U(0, φ_max / |N|)            if φ_M = 0
φ_k = U(0, φ_max / (|N| + 1))      if φ_M ≠ 0
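Under our reading of equation (2) and the piecewise rule above, the weights could be sampled as follows. This is a hedged sketch: the helper names are ours, and the exact split of φ_max is our interpretation of the reconstructed formulas.

```python
import numpy as np

PHI_MAX = 4.1  # phi_max from Clerc's constriction analysis [4]

def sample_memory_weight(n_neighbors, has_memory):
    # Equation (2): phi_M ~ U(0, phi_max / (|N| + 1)) for a particle that
    # currently holds its memory; a deprived particle gets phi_M = 0.
    if has_memory:
        return np.random.uniform(0.0, PHI_MAX / (n_neighbors + 1))
    return 0.0

def sample_neighbor_weights(n_neighbors, has_memory):
    # Piecewise rule above: phi_k ~ U(0, phi_max/(|N|+1)) if phi_M != 0,
    # else phi_k ~ U(0, phi_max/|N|), so the neighbors absorb the share of
    # the total acceleration that the memory term gives up.
    if has_memory:
        upper = PHI_MAX / (n_neighbors + 1)
    else:
        upper = PHI_MAX / n_neighbors
    return np.random.uniform(0.0, upper, size=n_neighbors)
```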

This paper mainly uses individual memory contribution to improve the PSO algorithm. The basic idea is that the initial particle swarm has no memory. As the individuals learn and grow, some outstanding individuals may draw on their successful experience to guide their behavior. However, this right is not permanent: if an individual makes no or little progress, it is deprived of its memory. This idea is realized by the strategy of dynamically adjusting the individual memory weight. The proposed particle swarm optimization algorithm with dynamic memory strategy is described as Algorithm 1.

Algorithm 1. Particle swarm optimization algorithm with dynamic memory strategy
Step 1. Randomly initialize each particle's position and velocity; initialize the individual memory weight to zero.
Step 2. Evaluate every particle.
Step 3. Calculate each particle's individual memory contribution according to equation (3).
Step 4. Calculate P_M according to equation (4).
Step 5. Use P_M to assign the value of φ_M.
Step 6. Update each particle's velocity and position in accordance with equation (5).
Step 7. If the best fitness of the swarm meets the stopping criterion or the maximum number of iterations is reached, terminate; otherwise, return to Step 2.
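Putting the pieces together, one iteration of Algorithm 1 might be sketched as below. Treating P_M as a per-particle acceptance probability in the roulette wheel step is one plausible reading of the paper, all helper names are ours, and the sketch assumes non-negative contributions C(p_t).

```python
import numpy as np

def dmpso_step(x, v, p_best, nbr_best, contrib, chi=0.7298, phi_max=4.1):
    """One iteration of Algorithm 1 (sketch, Steps 4-6).

    x, v     : (n, d) arrays of positions and velocities
    p_best   : (n, d) array of each particle's best position p_i
    nbr_best : list of n arrays, each (|N_i|, d), holding the neighbors'
               best positions p_nbr(k)
    contrib  : (n,) individual memory contributions C(p_t) from equation (3),
               assumed non-negative here
    """
    n, d = x.shape
    p_mem = contrib / contrib.sum()  # equation (4)
    for i in range(n):
        # Roulette wheel step: particle i keeps its memory with probability P_M.
        has_memory = np.random.rand() < p_mem[i]
        k = len(nbr_best[i])
        if has_memory:
            phi_m = np.random.uniform(0.0, phi_max / (k + 1))
            phi_k = np.random.uniform(0.0, phi_max / (k + 1), size=k)
        else:
            phi_m = 0.0
            phi_k = np.random.uniform(0.0, phi_max / k, size=k)
        # Equation (5): constricted update with dynamic memory weight phi_m.
        social = (phi_k[:, None] * (nbr_best[i] - x[i])).sum(axis=0)
        v[i] = chi * (v[i] + phi_m * (p_best[i] - x[i]) + social)
        x[i] = x[i] + v[i]
    return x, v
```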

4. NUMERICAL EXPERIMENT

To verify the effectiveness of the proposed method, our experiments use six optimization functions that are widely used in PSO research [14]. The Schaffer-F6 function is two-dimensional, the Griewank function is ten-dimensional, and the other functions are all thirty-dimensional. Note that the ten-dimensional Griewank function is more difficult to optimize than the thirty-dimensional version. The purpose of the experiments is to verify the validity of the dynamic strategy of individual memory weight.
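For reference, two of the benchmark functions can be written directly; these are the commonly used forms, and the exact variants (coefficients, shifts) used in the paper's implementation may differ.

```python
import numpy as np

def sphere(x):
    # Sphere function: f(x) = sum(x_i^2); global minimum 0 at the origin.
    return float(np.sum(x ** 2))

def ackley(x):
    # Commonly used form of the Ackley function; global minimum 0 at the origin.
    d = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
                 - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d)
                 + 20.0 + np.e)
```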

To exclude the impact of other unknown factors on the results, we adopt the same parameter settings in all experiments. The numerical experiments test and compare different configurations of the particle swarm optimization algorithm on the six testing functions above. Table 1 lists the dimensionality, search space, solution initialization conditions, and convergence criteria of the selected standard testing functions. We use the asymmetric solution initialization method, accelerating factor φ_max = 4.1, and shrinkage factor χ = 0.7298, based on Clerc's results [4]. The test results show that the maximum-velocity parameter has almost no effect on the performance of the algorithm. The population size is set to 50. The lbest model is chosen as the neighborhood topology in all algorithms [9].

All algorithms use the same terminal conditions. For the two measurement standards, two different terminal conditions are used. The first, the check point, measures the average performance of the algorithm: the best solution achieved at the check point is recorded for every run. In our numerical experiments the check point is set after 2000 iterations; because the population size is 50, this terminal condition requires 100,000 function evaluations. The second, the maximum number of iterations, measures the algorithm's success ratio: after the maximum number of iterations, the program terminates and the final result of the run is recorded, and comparison with the convergence criterion determines whether the run is successful or unsuccessful. In our numerical experiments the maximum number of iterations is set to 6000, that is, 300,000 function evaluations.

Our numerical experiments use two representative particle swarm optimization models, the classical PSO and FIPS [11], each tested with and without individual memory in order to evaluate the performance of the proposed strategy. The individual memory weight and the weights imposed by other individuals are set equal. The proposed particle swarm optimization model with dynamic memory strategy is embedded under both the classical PSO and the FIPS framework.

Table 2 lists the comparison results on the selected standard testing functions. CPSO denotes the classical particle swarm optimization algorithm, in which particles imitate the best individual of the neighborhood. FIPS represents the fully informed particle swarm, in which particles are affected by all individuals of the neighborhood. NMPSO denotes non-memory PSO, in which particles are independent of their past experience, and NMFIPS denotes the memory-free FIPS, in which particles are not affected by their past experiences. DMPSO denotes PSO with dynamic memory strategy and DMFIPS denotes FIPS with dynamic memory strategy. The Variant column lists the different algorithm implementations. The Performance column gives the average performance at the check point over 50 runs. The Proportion column gives the ratio of runs that reach the convergence criterion in these 50 runs.

It can be seen from Table 2 that the classical particle swarm model without individual memory performs rather badly: not a single one of the 50 runs converges successfully on the testing functions Ackley, Rosenbrock, and Sphere. Furthermore, on the Ackley and Rastrigin functions, the FIPS model with individual memory also shows poor performance, with no successful convergence in the 50 runs.


The experimental results show that individual memory is relatively important for the classical PSO model but becomes a surplus burden for the FIPS model, particularly in the asymmetrically initialized solution space. Performance is not uniform across the testing functions; consequently, dynamically adjusting the individual memory weight according to the specific problem is, in theory, effective. It can also be concluded from the experimental results that DMPSO performs better than CPSO and NMPSO, and DMFIPS performs better than FIPS and NMFIPS. Overall, DMFIPS performs best. This also confirms Mendes's conclusion that FIPS is more effective than classical PSO on most problems. The dynamic memory strategy compensates for the memory-related weakness of the FIPS model, so that it can be applied to a wider range of optimization problems.

We give convergence curves to further explain the performance of the proposed algorithm. According to the results in Table 2, the models based on FIPS are more effective, and the role of individual memory in them is more complex. Therefore, in order to study the contribution of individual memory to the PSO algorithm, we choose the three models based on the FIPS algorithm: FIPS, NMFIPS, and DMFIPS. Figures 1 to 6 show the convergence curves of the three algorithms on the six test functions. From these six figures, NMFIPS has a greater convergence rate but tends to fall into local optima easily. FIPS faces a relatively small threat of premature convergence, but in most cases it has a smaller convergence rate as a result of the continuing influence of individual memory. DMFIPS combines the advantages of both NMFIPS and FIPS, and its dynamic mechanism avoids the deficiencies of each. Viewed as a whole, DMFIPS is the better choice.

Table 2: The Role of Individual Memory Contribution of Particle Swarm

Function     Variant   Performance           Prop.
Ackley       CPSO      15.533 ± 3.028        0.36
             FIPS      20.641 ± 0.391        0
             NMPSO     19.442 ± 1.137        0
             NMFIPS    0.128 ± 0.011         1
             DMPSO     1.162 ± 0.268         0.88
             DMFIPS    0.022 ± 0.007         1
Rastrigin    CPSO      150.790 ± 11.836      0.38
             FIPS      178.981 ± 4.725       0
             NMPSO     253.764 ± 10.242      0.78
             NMFIPS    134.554 ± 5.298       0.96
             DMPSO     122.356 ± 3.254       0.96
             DMFIPS    71.863 ± 4.836        1
Schaffer-F6  CPSO      3.91E-5 ± 4.61E-6     0.90
             FIPS      4.86E-9 ± 6.85E-10    0.96
             NMPSO     5.28E-4 ± 1.02E-4     0.92
             NMFIPS    3.19E-6 ± 7.63E-7     0.78
             DMPSO     9.51E-5 ± 3.66E-5     0.98
             DMFIPS    1.69E-9 ± 5.02E-10    0.98
Griewank-10  CPSO      0.029 ± 0.003         0.98
             FIPS      0.014 ± 0.006         1
             NMPSO     0.971 ± 0.302         0.86
             NMFIPS    3.44E-4 ± 6.28E-5     1
             DMPSO     0.012 ± 0.008         1
             DMFIPS    5.05E-8 ± 2.25E-9     1
Rosenbrock   CPSO      27.454 ± 1.406        1
             FIPS      84.022 ± 2.857        0.98
             NMPSO     70585 ± 12082         0
             NMFIPS    26.080 ± 0.658        1
             DMPSO     26.556 ± 0.692        1
             DMFIPS    22.223 ± 0.084        1
Sphere       CPSO      6.44E-9 ± 2.93E-10    0.98
             FIPS      2.88E-16 ± 5.52E-17   1
             NMPSO     31870 ± 542           0
             NMFIPS    3.11E-11 ± 1.06E-12   1
             DMPSO     9.27E-21 ± 5.61E-22   1
             DMFIPS    1.87E-27 ± 6.49E-28   1

Figure 1: Convergence curves of Ackley function

5. CONCLUSION

In a particle swarm, the impact of each particle's individual memory on its optimization process differs from particle to particle. Some memories have a positive impact on an individual's optimization performance, while others have a negative impact. In addition, the same individual plays different roles at different stages of its life cycle, and the strength of the effect also differs. Based on this conclusion, we put forward the concept of individual memory contribution and give

Figure 2: Convergence curves of Griewank function


Table 1: Settings of Standard Testing Functions

Function    Dimensionality  Search Space     Initialization Space  Convergence Criterion
Sphere      30              (-100, 100)^D    (50, 100)^D           f(x) ≤ 0.01
Rosenbrock  30              (-30, 30)^D      (15, 30)^D            f(x) ≤ 100
Ackley      30              (-30, 30)^D      (15, 30)^D            f(x) ≤ 0.01
Rastrigin   30              (-5.12, 5.12)^D  (2.56, 5.12)^D        f(x) ≤ 100
Griewank    10              (-600, 600)^D    (300, 600)^D          f(x) ≤ 0.05
Schaffer    2               (-100, 100)^D    (50, 100)^D           f(x) ≤ 0.00001

Figure 3: Convergence curves of Rastrigin function

Figure 5: Convergence curves of Sphere function

Figure 4: Convergence curves of Rosenbrock function

Figure 6: Convergence curves of Schaffer function


several alternative measurement methods to determine the level of effect of individual memory on an individual's behavior. According to this measurement indicator, we design a strategy for dynamically assigning individual memory weight and realize the PSO algorithm with dynamic memory strategy. Numerical experiments indicate that dynamic memory is an effective improvement strategy for the PSO algorithm. In the proposed PSO algorithm with dynamic memory strategy, all four measurement methods are based on an individual's own performance; they can also be combined with factors beyond an individual's own performance to measure individual memory contribution and thus reflect the particle swarm's population characteristics. This paper deals only with the choice of whether or not to assign the weight, while the weight value itself is unchanged; further research can consider the situation in which the weight changes dynamically with the individual memory contribution.

6. ACKNOWLEDGMENTS

This work was supported in part by NSFC (Grant No. 40701153) and the Wuhan International Cooperation and Communication Project (Grant No. 200770834318).

7. REFERENCES

[1] C. N. Bendtsen and T. Krink. Dynamic memory model for non-stationary optimization. In Proceedings of the IEEE Congress on Evolutionary Computation, pages 145-150. IEEE, May 2002.
[2] A. Chatterjee and P. Siarry. Nonlinear inertia variation for dynamic adaptation in particle swarm optimization. Computers and Operations Research, 33(3):859-871, 2006.
[3] M. Clerc. Discrete particle swarm optimization. In New Optimization Techniques in Engineering, pages 1942-1948. IEEE, March 2004.
[4] M. Clerc and J. Kennedy. The particle swarm – explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation, 6(1):58-73, 2002.
[5] A. El-Gallad, M. El-Hawary, A. Sallam, and A. Kalas. Enhancing the particle swarm optimizer via proper parameters selection. In Proceedings of the Canadian Conference on Electrical and Computer Engineering, pages 792-797. IEEE, August 2002.
[6] H. Y. Fan. A modification to particle swarm optimization algorithm. Engineering Computations, 19(7-8):970-989, 2002.
[7] X. Hu and R. Eberhart. Multiobjective optimization using dynamic neighborhood particle swarm optimization. In Proceedings of the IEEE Congress on Evolutionary Computation, pages 1677-1681. IEEE, May 2002.
[8] X. Hu and R. C. Eberhart. Adaptive particle swarm optimization: Detection and response to dynamic systems. In Proceedings of the IEEE Congress on Evolutionary Computation, pages 1666-1670. IEEE, May 2002.
[9] J. Kennedy and R. C. Eberhart. Particle swarm optimization. In Proceedings of the 1995 IEEE International Conference on Neural Networks, pages 1942-1948. IEEE, November 1995.
[10] B. Latane. The psychology of social impact. American Psychologist, 36(4):343-356, 1981.
[11] R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm: Simpler, maybe better. IEEE Transactions on Evolutionary Computation, 8(3):204-210, 2004.
[12] K. E. Parsopoulos and M. N. Vrahatis. Recent approaches to global optimization problems through particle swarm optimization. Natural Computing, 1(2-3):235-306, 2002.
[13] Y. Shi and R. C. Eberhart. A modified particle swarm optimizer. In Proceedings of the IEEE Congress on Evolutionary Computation, pages 69-73. IEEE, May 1998.
[14] I. C. Trelea. The particle swarm optimization algorithm: Convergence analysis and parameter selection. Information Processing Letters, 85(6):317-325, 2003.

