
CINTI 2011 - 12th IEEE International Symposium on Computational Intelligence and Informatics, 21-22 November 2011, Budapest, Hungary

A New Method to Improve the Particle Swarm Optimization
Using Cellular Learning Automata (CLAPSO)

M. J. Fattahi Hasan Abad*, S. M. Salari**, M. A. Saadatjoo***

* Department of Computer, Ashkezar Branch, Islamic Azad University, Yazd, Iran
** Department of Computer, Elmi-karbordi University, Yazd, Iran
*** University of Debrecen, Faculty of IT, 4033 Debrecen, Hungary
mjfattahi@gmail.com, salari.sm@gmail.com, m.a.saadatjoo@inf.unideb.hu

Abstract: Particle swarm optimization (PSO) is a population-based stochastic optimization technique inspired by the social behavior of bird flocking and fish schooling. Since its introduction, PSO has developed rapidly and been applied widely, as it is easy to understand and implement. The main weakness of PSO, especially in multimodal problems, is its tendency to become trapped in local optima. This paper presents an improved particle swarm optimization algorithm (CLAPSO) that uses a dynamic inertia weight to improve the performance of standard PSO. Experimental results indicate that CLAPSO significantly improves search performance on the benchmark functions.

Index Terms: Particle Swarm Optimization, Learning Automata, Cellular Learning Automata.

I. INTRODUCTION

The particle swarm optimization (PSO) algorithm is an evolutionary computation technique developed by Eberhart and Kennedy in 1995 [1, 2], inspired by the social behavior of bird flocking and fish schooling. PSO has been successfully applied in many areas: function optimization, artificial neural network training, fuzzy system control, and other areas where GA can be applied. Recently, several investigations have been undertaken to improve the performance of standard PSO, with rich results. Clerc and Kennedy [3] analyzed particle swarm explosion, stability, and convergence in a multi-dimensional complex space. Eberhart and Shi compared genetic algorithms with PSO in [4], investigated PSO developments, applications, and resources in [5], and presented a modified particle swarm optimizer in [6, 7].
Fan presented a modification to the PSO algorithm in [8], and He et al. put forward a particle swarm optimizer with passive congregation in [9]. Liu et al. presented an improved particle swarm optimization combined with chaos in [10]. Shi et al. [11] presented an improved GA and a novel PSO-GA-based hybrid algorithm. Kennedy [12] studied the particle swarm as social adaptation of knowledge. Robinson et al. [13] investigated the particle swarm, the genetic algorithm, and their hybrids in the optimization of a profiled corrugated horn antenna. Shi and Eberhart researched parameter selection in particle swarm optimization and evolutionary programming in [14]. Kennedy and Mendes [15] investigated the impact of population structures on the search performance of SPSO. Other investigations on improving PSO's performance were undertaken using cluster analysis [16]. Trelea [17] researched the convergence analysis and parameter selection of the PSO algorithm. In addition, in [18, 19] the authors researched chaotic optimization and its applications.

978-1-4577-0045-3/11/$26.00 © 2011 IEEE
In this paper we propose a dynamic inertia weight PSO algorithm for optimization problems using cellular learning automata. Cellular learning automata are a mathematical model for dynamic complex systems consisting of a large number of simple components. The simple components, which have learning capabilities, act together to produce complicated behavioral patterns. The rest of this paper is organized as follows. Section II introduces the PSO algorithm. Sections III and IV briefly introduce cellular automata and learning automata, respectively. Section V introduces cellular learning automata. Section VI describes the proposed model. Section VII provides simulation results, and the final section concludes the paper.
II. PARTICLE SWARM OPTIMIZATION (PSO)

A. The original PSO

In PSO, each potential solution to an optimization problem is treated as a bird, which is also called a particle. The set of particles, also known as a swarm, is flown through the D-dimensional search space of the problem. The position of each particle is changed based on the experiences of the particle itself and those of its neighbors. The position of the ith particle is represented as

x_i = (x_i1, x_i2, ..., x_iD)    (1)

where x_id ∈ [l_d, u_d], d ∈ [1, D], and l_d and u_d are the lower and upper bounds of the dth dimension of the search space. Similar to the position, the velocity of each particle is represented with a vector; the velocity of the ith particle is v_i = (v_i1, v_i2, ..., v_iD). At each time step, the position and velocity of the particles are updated according to the following equations [2]:



v_ij(t+1) = v_ij(t) + c_1 R_1ij (P_ij - x_ij(t)) + c_2 R_2ij (P_gj - x_ij(t))    (2)

x_i(t+1) = x_i(t) + v_i(t+1)

where R_1ij and R_2ij are two distinct random values in [0, 1], c_1 and c_2 are acceleration constants, P_i is the best previous position of the particle itself, and P_g denotes the best previous position among all particles of the swarm. If c_1 is set to 0 (and c_2 ≠ 0), the PSO algorithm turns into the social-only model, and if c_2 is set to 0 (and c_1 ≠ 0), it becomes the cognition-only model.
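The update in Eq. (2) can be sketched in a few lines of Python. The values c_1 = c_2 = 2.0 below are common illustrative defaults, not parameter choices taken from this paper.

```python
import random

def pso_step(x, v, pbest, gbest, c1=2.0, c2=2.0):
    """One velocity/position update of a single particle, per Eq. (2).

    x, v, pbest are length-D lists for this particle; gbest is the best
    position found by the whole swarm so far.
    """
    for j in range(len(x)):
        r1, r2 = random.random(), random.random()  # R1ij, R2ij in [0, 1]
        v[j] += c1 * r1 * (pbest[j] - x[j]) + c2 * r2 * (gbest[j] - x[j])
        x[j] += v[j]
    return x, v
```

Each dimension is pulled stochastically toward both the particle's own best position and the swarm's best position.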

B. Inertia weight
The balance between global and local search
throughout the course of a run is critical to the success of
an optimization algorithm [20]. Almost all of the
evolutionary algorithms utilize some mechanisms to
achieve this goal. The step size of the normal mutation in
evolution strategies [21] and temperature parameter in
simulated annealing [22] are two examples of balance
controlling parameters. To bring about a balance between
the exploration and exploitation characteristics of PSO,
Shi and Eberhart proposed a PSO based on inertia weight
in which the velocity of each particle is updated
according to the following equation [6]:

v_ij(t+1) = w * v_ij(t) + c_1 R_1ij (P_ij - x_ij(t)) + c_2 R_2ij (P_gj - x_ij(t))    (3)

They claimed that a large inertia weight facilitates a global search, while a small inertia weight facilitates a local search. By changing the inertia weight dynamically, the search capability is dynamically adjusted. This general statement about the impact of w on PSO's search behavior is shared by many other researchers. However, there are situations where this rule cannot be applied successfully.
Moreover, the traditional inertia weight adaptation mechanism cannot be successfully employed in some of the new PSO models proposed in the last decade. For example, in multi-swarm particle swarm optimizers the main population is divided into a number of sub-swarms in the hope of exploring different areas of the search space [23, 24]. When the inertia weight is adjusted only according to the iteration number of the overall algorithm, the sub-swarms, which may be created at any time during a run, cannot be given a w value compatible with their needs.
In addition to the above examples, there are some
optimization problems that show the need for a new
inertia weight adaptation mechanism. For example, in
dynamic environments with moving peaks, the standard
PSO fails to follow the extrema due to the small velocity
values of the particles [25].
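For contrast with the adaptive scheme proposed later in this paper, the conventional time-based adaptation discussed above is typically a linear decrease of w over the run. A minimal sketch, with the widely used (but here only illustrative) bounds 0.9 and 0.4:

```python
def linear_inertia(t, t_max, w_max=0.9, w_min=0.4):
    """Inertia weight decreasing linearly from w_max to w_min over t_max iterations."""
    return w_max - (w_max - w_min) * t / t_max
```

Note that this schedule depends only on the global iteration counter t, which is exactly the limitation the multi-swarm and dynamic-environment examples above expose.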

III. CELLULAR AUTOMATA

One of the models used to develop cellular evolutionary algorithms is the cellular automaton (CA). A cellular automaton is an abstract model consisting of a large number of simple identical components with local interaction. A CA is a non-linear dynamical system in which space and time are discrete. It is called cellular because it is made up of cells, like points in a lattice or squares of a checkerboard, and it is called an automaton because it follows a simple rule. The simple components act together to produce complicated patterns of behavior. CA performs complex computation with a high degree of efficiency and robustness. It is especially suitable for modeling natural systems that can be described as massive collections of simple objects interacting locally with each other. A cellular automaton not only has a simple structure for modeling complex systems, but can also be implemented easily on SIMD processors. Therefore it has been used frequently in evolutionary computing. Much literature is available on cellular automata and their applications to evolutionary computing; the interested reader is referred to [26], [27].
IV. LEARNING AUTOMATA

Learning automata [28], [29] are adaptive decision-making devices operating in unknown random environments. A learning automaton has a finite set of actions, and each action has a certain probability (unknown to the automaton) of being rewarded by the environment. The aim is to learn to choose the optimal action (i.e., the action with the highest probability of being rewarded) through repeated interaction with the environment. If the learning algorithm is chosen properly, the iterative process of interacting with the environment can be made to result in selection of the optimal action.

[Figure: the automaton applies an action α(n) to the environment and receives a response β(n) in return.]

Figure 1. The interaction between a learning automaton and its environment

Figure 1 illustrates how a stochastic automaton works in feedback connection with a random environment. Learning automata can be classified into two main families: fixed-structure learning automata and variable-structure learning automata (VSLA). In the following, variable-structure learning automata are described. A VSLA is a quadruple <α, β, p, T(α, β, p)>, where α is an action set with s actions, β is an environment response set, and p is the probability vector containing s probabilities, each being the probability of performing the corresponding action in the current internal automaton state. T is the reinforcement algorithm, which modifies the action probability vector p with respect to the performed action and the received response. Let a VSLA operate in an environment with β = {0, 1}. Let n ∈ N be



the set of nonnegative integers. A general linear schema for updating the action probabilities can be represented as follows. Let action i be performed at instance n.

If β(n) = 0 (reward):

p_i(n+1) = p_i(n) + a [1 - p_i(n)]
p_j(n+1) = (1 - a) p_j(n),  for all j ≠ i    (4)

If β(n) = 1 (penalty):

p_i(n+1) = (1 - b) p_i(n)
p_j(n+1) = b/(s - 1) + (1 - b) p_j(n),  for all j ≠ i    (5)

where a and b are the reward and penalty parameters. When a = b, the automaton is called L_RP; if b = 0, it is called L_RI; and if 0 < b << a < 1, it is called L_RεP. Figure 2 shows the working mechanism of a learning automaton.
Initialize p to [1/s, 1/s, ..., 1/s], where s is the number of actions
While not done
    Select an action i based on the probability vector p
    Evaluate the action and return a reinforcement signal β
    Update the probability vector p using the learning rule
End While

Figure 2. Pseudocode of a variable-structure learning automaton.
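The pseudocode of Figure 2, together with the linear reward/penalty scheme of Eqs. (4)-(5), can be sketched as follows; the learning rates a = b = 0.1 are illustrative only.

```python
import random

class VSLA:
    """Variable-structure learning automaton with s actions, Eqs. (4)-(5)."""

    def __init__(self, s, a=0.1, b=0.1):
        self.s, self.a, self.b = s, a, b
        self.p = [1.0 / s] * s  # start with all actions equally likely

    def select(self):
        # Sample an action index according to the probability vector p
        return random.choices(range(self.s), weights=self.p)[0]

    def update(self, i, beta):
        """Reinforce action i given the environment response beta (0 = reward)."""
        if beta == 0:  # Eq. (4): reward
            self.p = [pj + self.a * (1 - pj) if j == i else (1 - self.a) * pj
                      for j, pj in enumerate(self.p)]
        else:          # Eq. (5): penalty
            self.p = [(1 - self.b) * pj if j == i
                      else self.b / (self.s - 1) + (1 - self.b) * pj
                      for j, pj in enumerate(self.p)]
```

Both branches preserve the total probability mass, so p remains a valid distribution after every update; setting b = 0 yields the L_RI scheme.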

V. CELLULAR LEARNING AUTOMATA

Cellular learning automata (CLA) are a mathematical model for dynamic complex systems consisting of a large number of simple components. The simple components, which have learning capabilities, act together to produce complicated behavioral patterns. A CLA is a CA in which a learning automaton (or multiple learning automata) is assigned to every cell. The learning automaton residing in a particular cell determines its state (action) on the basis of its action probability vector. Like a CA, there is a rule under which the CLA operates. The rule of the CLA and the actions selected by the neighboring LAs of any particular LA determine the reinforcement signal to the LA residing in that cell. In a CLA, the neighboring LAs of any particular LA constitute its local environment, which is nonstationary because it varies as the action probability vectors of the neighboring LAs vary.

The operation of cellular learning automata can be described as follows. In the first step, the internal state of every cell is specified, determined on the basis of the action probability vectors of the learning automata residing in that cell; the initial value may be chosen on the basis of experience or at random. In the second step, the rule of the cellular automata determines the reinforcement signal to each learning automaton residing in each cell. Finally, each learning automaton updates its action probability vector on the basis of the supplied reinforcement signal and the chosen action. This process continues until the desired result is obtained [30], [31].

VI. THE PROPOSED MODEL (CLAPSO)

In our proposed algorithm, in order to balance global search and local search, we use a separate learning automaton for the inertia weight parameter of each particle of the population: each cell of the cellular learning automata holds one particle of the population, and each cell's learning automaton adjusts the inertia weight parameter used to update the velocity of that particle.

If we denote the inertia weight parameter by w, this learning automaton has three actions: "increase w", "decrease w", and "keep w constant". The automaton selects one of these actions at each step, and the inertia weight parameter is modified according to the selected action; the particle then updates its velocity and position using the new value of the inertia weight. Initially, the probability of selecting each action is the same; then, considering the selected action and the feedback received from the environment, the action-selection probabilities change in subsequent steps. How this automaton works is shown in Figure 3.
The overall implementation of this algorithm can be expressed as follows. Initially, the positions and velocities of the particles and the probability vectors of the learning automata are initialized. Then, until the maximum number of steps is performed or the goal is achieved, the following steps are repeated:
1 - The learning automaton in each cell selects one of its actions according to its probability vector.
2 - According to the selected action, the inertia weight parameter (w) is modified, and the particle's velocity and position are updated.
3 - According to the result of updating the particle's position, the learning automaton's action is evaluated and its probability vector is modified.
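The three steps above can be sketched for a single cell/particle as follows. This is only a sketch under stated assumptions: the dictionary field names and the constants W_STEP, W_MIN, W_MAX are hypothetical, and the automaton is rewarded here simply when the particle improves its personal best, a simplification of the paper's superior-neighbor evaluation described below.

```python
import random

W_STEP, W_MIN, W_MAX = 0.05, 0.4, 1.0  # illustrative step size and bounds, not from the paper

def clapso_step(p, la, f, gbest, c1=2.0, c2=2.0, a=0.001):
    """One iteration of steps 1-3 for a single cell/particle.

    p is a dict with keys 'x', 'v', 'w', 'pbest', 'fbest' (hypothetical names);
    la is the cell's 3-entry action probability vector; f is the objective.
    """
    # 1) the cell's automaton picks one of its three actions on w
    i = random.choices(range(3), weights=la)[0]
    p['w'] = min(W_MAX, max(W_MIN, p['w'] + (+W_STEP, -W_STEP, 0.0)[i]))
    # 2) velocity/position update with the new inertia weight, per Eq. (3)
    for j in range(len(p['x'])):
        r1, r2 = random.random(), random.random()
        p['v'][j] = (p['w'] * p['v'][j]
                     + c1 * r1 * (p['pbest'][j] - p['x'][j])
                     + c2 * r2 * (gbest[j] - p['x'][j]))
        p['x'][j] += p['v'][j]
    # 3) evaluate; on improvement, reward the chosen action (L_RI-style update)
    fx = f(p['x'])
    if fx < p['fbest']:
        p['fbest'], p['pbest'] = fx, list(p['x'])
        la = [q + a * (1 - q) if k == i else (1 - a) * q for k, q in enumerate(la)]
    return p, la
```

Running this for every cell of the lattice in each generation gives one CLAPSO iteration.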

[Figure: the learning automaton chooses among its three actions (increase w, decrease w, keep w constant); the outcome is evaluated and the probability vector is updated.]

Figure 3. How the learning automaton used in CLAPSO works

The selected action is evaluated as follows: at time t, each cell evaluates the position of itself and its neighbors (according to the type of neighborhood selected) and, based on the evaluation function, selects some of them as superior cells. This selection is not mutual: if one cell selects another cell as one of its candidates, there is no guarantee that it will itself be selected by that cell. The learning automaton in each cell is then rewarded or penalized according to the actions that the superior cells' learning automata have taken.

One of the main advantages of this method is its ability to escape local optima, together with its high speed and accuracy of convergence to the answer. In effect, it performs a global search on the answer space when w increases and a local search when w decreases. At the beginning of the algorithm, the parameter w has its highest value (1), and the algorithm is designed to increase w on occasions where a decrease in w could trap the search in a local optimum, so that increasing w gives the search a chance to escape.

Figure 4. Comparison of the effectiveness of the proposed algorithm with other methods using the Sphere evaluation function

VII. EXPERIMENTAL RESULTS

The experiments were carried out on four standard functions that are commonly used as benchmarks for optimization algorithms: Ackley, Rastrigin, Rosenbrock, and Sphere, defined by the following equations.

f(x) = 20 + e - 20 exp(-0.2 sqrt((1/n) Σ_{i=1}^{n} x_i^2)) - exp((1/n) Σ_{i=1}^{n} cos(2π x_i))    (6)

f(x) = Σ_{i=1}^{n-1} (100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2)    (7)

f(x) = Σ_{i=1}^{n} (x_i^2 - 10 cos(2π x_i) + 10)    (8)

f(x) = Σ_{i=1}^{n} x_i^2    (9)

Figure 5. Comparison of the effectiveness of the proposed algorithm with other methods using the Rosenbrock evaluation function
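The four benchmark functions of Eqs. (6)-(9) translate directly to code:

```python
import math

def ackley(x):       # Eq. (6), global minimum 0 at the origin
    n = len(x)
    return (20 + math.e
            - 20 * math.exp(-0.2 * math.sqrt(sum(xi * xi for xi in x) / n))
            - math.exp(sum(math.cos(2 * math.pi * xi) for xi in x) / n))

def rosenbrock(x):   # Eq. (7), global minimum 0 at (1, ..., 1)
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):    # Eq. (8), global minimum 0 at the origin
    return sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def sphere(x):       # Eq. (9), global minimum 0 at the origin
    return sum(xi * xi for xi in x)
```

All four reach the global optimum value of 0, which is the property the experiments below rely on.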

All of these functions have a global optimum value of 0. Experiments were made with n (the number of function dimensions) set to 10, 20, and 30. The population size and the number of steps were 25 and 1000, respectively, and the learning rate of the automata was set to 0.001. The neighborhood type used was two-dimensional. The experiments were repeated 30 times, and the best and average results are presented.

In the following, this method is compared with several other methods. As can be seen, the proposed algorithm shows better results than the similar ones, as it can better balance global search and local search by learning the function's behavior from the feedback it receives from the environment. Moreover, as mentioned earlier, this algorithm is more likely to escape from local optima than similar algorithms.


Figure 6. Comparison of the effectiveness of the proposed algorithm with other methods using the Ackley evaluation function


Figure 7. Comparison of the effectiveness of the proposed algorithm with other methods using the Rastrigin evaluation function

TABLE I. COMPARISON OF THE EFFECTIVENESS OF THE PROPOSED ALGORITHM WITH OTHER METHODS USING THE SPHERE EVALUATION FUNCTION

Dimensions  Algorithm   Average    Best
10          PSO-Std     0.3311     0.0091
10          PSO-I.W     0.0020     0.0004
10          PSOLA-LRP   0.0013     0.0001
10          PSOLA-LRI   0.0001     9.13E-5
10          CLAPSO      2.06E-9    8.15E-37
20          PSO-Std     9.2176     0.6920
20          PSO-I.W     7.0709     0.0084
20          PSOLA-LRP   0.0123     0.0065
20          PSOLA-LRI   0.0020     0.0018
20          CLAPSO      0.0021     2.14E-16
30          PSO-Std     15.089     2.1109
30          PSO-I.W     5.8210     0.0245
30          PSOLA-LRP   0.0118     0.0087
30          PSOLA-LRI   0.0094     0.0057
30          CLAPSO      0.0043     1.15E-9

TABLE II. COMPARISON OF THE EFFECTIVENESS OF THE PROPOSED ALGORITHM WITH OTHER METHODS USING THE ROSENBROCK EVALUATION FUNCTION

Dimensions  Algorithm   Average    Best
10          PSO-Std     352.86     14.397
10          PSO-I.W     11.334     0.4310
10          PSOLA-LRP   10.998     0.3647
10          PSOLA-LRI   7.1102     0.7301
10          CLAPSO      3.7895     0.8745
20          PSO-Std     31674      5864.0
20          PSO-I.W     35.240     14.720
20          PSOLA-LRP   29.354     11.487
20          PSOLA-LRI   31.984     14.623
20          CLAPSO      8.1647     9.1025
30          PSO-Std     412511     52057
30          PSO-I.W     89.001     29.152
30          PSOLA-LRP   67.051     23.692
30          PSOLA-LRI   67.847     22.987
30          CLAPSO      21.102     11.102

TABLE III. COMPARISON OF THE EFFECTIVENESS OF THE PROPOSED ALGORITHM WITH OTHER METHODS USING THE ACKLEY EVALUATION FUNCTION

Dimensions  Algorithm   Average    Best
10          PSO-Std     2.5142     0.5987
10          PSO-I.W     2.4975     0.0841
10          PSOLA-LRP   1.7850     0.0124
10          PSOLA-LRI   1.5987     0.0125
10          CLAPSO      0.0012     4.58E-11
20          PSO-Std     6.0254     4.3215
20          PSO-I.W     5.3620     3.7645
20          PSOLA-LRP   4.8750     2.5874
20          PSOLA-LRI   4.7914     2.5802
20          CLAPSO      0.0214     3.14E-5
30          PSO-Std     9.1140     7.4618
30          PSO-I.W     8.3401     6.5148
30          PSOLA-LRP   7.7789     6.2789
30          PSOLA-LRI   7.6429     6.1524
30          CLAPSO      0.5180     0.0051

TABLE IV. COMPARISON OF THE EFFECTIVENESS OF THE PROPOSED ALGORITHM WITH OTHER METHODS USING THE RASTRIGIN EVALUATION FUNCTION

Dimensions  Algorithm   Average    Best
10          PSO-Std     10.148     4.9702
10          PSO-I.W     8.6923     3.6589
10          PSOLA-LRP   9.3910     3.0879
10          PSOLA-LRI   9.5378     4.0125
10          CLAPSO      6.2501     2.3102
20          PSO-Std     28.210     10.052
20          PSO-I.W     28.930     10.998
20          PSOLA-LRP   21.845     9.2302
20          PSOLA-LRI   22.726     11.625
20          CLAPSO      12.870     5.1870
30          PSO-Std     57.105     28.492
30          PSO-I.W     34.165     24.257
30          PSOLA-LRP   31.867     16.925
30          PSOLA-LRI   32.219     13.720
30          CLAPSO      14.780     7.4896

VIII. CONCLUSION

In this paper a new method was presented for improving the particle swarm optimization algorithm. In the proposed approach, we use cellular learning automata to adjust each particle's inertia weight parameter in order to balance global search and local search. Denoting the inertia weight parameter by w, the learning automaton in each cell has three actions: "increase w", "decrease w", and "keep w constant". In this method, the learning automata make it possible in each generation to exploit both the present velocity and the personal and group experience of the particles by adjusting this parameter. In effect, each automaton automatically determines the inertia weight parameter for its particle according to the feedback it receives from the environment in each generation. Simulation results show that the proposed model yields superior results compared to similar models.
ACKNOWLEDGMENT

This work is supported by Islamic Azad University, Ashkezar Branch, Yazd, Iran, and Elmi-karbordi University, Yazd, Iran. I would like to thank Dr. M. R. Meybodi and Dr. M. M. Ebadzadeh for many insightful suggestions that helped me to properly present this work. Most of all, I would like to thank my mother for her unending support and love.
REFERENCES

[1] R. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proc. Sixth Int. Symposium on Micro Machine and Human Science, Nagoya, Japan, 1995, pp. 39-43.
[2] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proc. IEEE Int. Conf. on Neural Networks, Perth, Australia, 1995.
[3] M. Clerc and J. Kennedy, "The particle swarm: explosion, stability and convergence in a multi-dimensional complex space," IEEE Trans. Evolutionary Computation, vol. 6, no. 1, pp. 58-73, 2002.
[4] R. C. Eberhart and Y. Shi, "Comparison between genetic algorithms and particle swarm optimization," in Evolutionary Programming VII: Proc. Seventh Annual Conf. on Evolutionary Programming, San Diego, CA, Springer-Verlag, 1998, pp. 611-616.
[5] R. C. Eberhart and Y. Shi, "Particle swarm optimization: developments, applications and resources," in Proc. IEEE Int. Conf. on Evolutionary Computation, 2001, pp. 81-86.
[6] Y. Shi and R. C. Eberhart, "A modified particle swarm optimizer," in Proc. IEEE Int. Conf. on Evolutionary Computation, 1997, pp. 303-308.
[7] Y. Shi and R. Eberhart, "A modified particle swarm optimizer," in Proc. IEEE World Congress on Computational Intelligence, 1998, pp. 69-73.
[8] H. Y. Fan, "A modification to particle swarm optimization algorithm," Engineering Computations, vol. 19, no. 8, pp. 970-989, 2002.
[9] S. He, Q. H. Wu, J. Y. Wen, J. R. Saunders, and R. C. Paton, "A particle swarm optimizer with passive congregation," BioSystems, vol. 78, pp. 135-147, 2004.
[10] B. Liu, L. Wang, Y. H. Jin, F. Tang, and D. X. Huang, "Improved particle swarm optimization combined with chaos," Chaos, Solitons & Fractals, vol. 25, pp. 1261-1271, 2005.
[11] X. H. Shi, Y. C. Liang, H. P. Lee, C. Lu, and L. M. Wang, "An improved GA and a novel PSO-GA-based hybrid algorithm," Information Processing Letters, vol. 93, pp. 255-261, 2005.
[12] J. Kennedy, "The particle swarm: social adaptation of knowledge," in Proc. IEEE Int. Conf. on Evolutionary Computation, Indianapolis, IN, IEEE Service Center, 1997, pp. 303-308.
[13] J. Robinson, S. Sinton, and Y. Rahmat-Samii, "Particle swarm, genetic algorithm, and their hybrids: optimization of a profiled corrugated horn antenna," in IEEE Antennas and Propagation Society Int. Symposium and URSI National Radio Science Meeting, San Antonio, TX, 2002, pp. 168-175.
[14] Y. Shi and R. C. Eberhart, "Parameter selection in particle swarm optimization," in Evolutionary Programming VII: Proc. Seventh Annual Conf. on Evolutionary Programming, New York, 1998, pp. 591-600.
[15] J. Kennedy and R. Mendes, "Population structure and particle swarm performance," in Proc. 2002 Congress on Evolutionary Computation (CEC 2002), IEEE Press, 2002, pp. 1671-1676.
[16] J. Kennedy, "Stereotyping: improving particle swarm performance with cluster analysis," in Proc. IEEE Int. Conf. on Evolutionary Computation, 2000, pp. 1507-1512.
[17] I. C. Trelea, "The particle swarm optimization algorithm: convergence analysis and parameter selection," Information Processing Letters, vol. 85, no. 6, pp. 317-325, 2003.
[18] M. J. Ji and H. W. Tang, "Application of chaos in simulated annealing," Chaos, Solitons & Fractals, vol. 21, pp. 933-941, 2004.
[19] Z. Lu, L. S. Shieh, and G. R. Chen, "On robust control of uncertain chaotic systems: a sliding-mode synthesis via chaotic optimization," Chaos, Solitons & Fractals, vol. 18, pp. 819-827, 2003.
[20] Y. Shi and R. Eberhart, "Fuzzy adaptive particle swarm optimization," in Proc. Congress on Evolutionary Computation 2001, Seoul, Korea, 2001.
[21] I. Rechenberg, Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution, Frommann-Holzboog, Stuttgart, 1973.
[22] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, no. 4598, pp. 671-680, 1983.
[23] A. Nickabadi, M. M. Ebadzadeh, and R. Safabakhsh, "DNPSO: a dynamic niching particle swarm optimizer for multi-modal optimization," in IEEE World Congress on Computational Intelligence (WCCI), 2008.
[24] X. Li, "Adaptively choosing neighborhood bests using species in a particle swarm optimizer for multimodal function optimization," in Proc. GECCO 2004, LNCS 3102, Springer-Verlag, Seattle, USA, 2004, pp. 105-116.
[25] A. Nickabadi, M. M. Ebadzadeh, and R. Safabakhsh, "Evaluating the performance of DNPSO in dynamic environments," in IEEE Int. Conf. on Systems, Man, and Cybernetics, 2008.
[26] M. Mitchell, J. P. Crutchfield, and R. Das, "Evolving cellular automata with genetic algorithms," in Proc. First Int. Conf. on Evolutionary Computation and Its Applications (EvCA'96), Moscow, Russia: Russian Academy of Sciences, 1996.
[27] S. Wolfram, Cellular Automata and Complexity, Perseus Books Group, 1994.
[28] K. S. Narendra and M. A. L. Thathachar, Learning Automata: An Introduction, Prentice Hall, 1989.
[29] K. Najim and A. S. Poznyak, Eds., Learning Automata: Theory and Applications, Tarrytown, New York: Elsevier Science, 1994.
[30] M. R. Meybodi, H. Beigy, and M. Taherkhani, "Cellular learning automata," in Proc. 6th Annual Int. Computer Society of Iran Computer Conference (CSICC 2001), Isfahan, Iran, 2001, pp. 153-163.
[31] M. R. Meybodi, H. Beigy, and M. Taherkhani, "Cellular learning automata and its applications," Journal of Science and Technology, University of Sharif, no. 25, pp. 54-77, Autumn/Winter 2003-2004.

