
ICCC2009 • IEEE 7th International Conference on Computational Cybernetics • November 26-29, 2009 • Palma de Mallorca, Spain

Algorithm for Pulse Coupled Neural Network Parameters Estimation

R. Forgáč, I. Mokriš*
* Institute of Informatics, Slovak Academy of Sciences, Bratislava, Slovak Republic
radoslav.forgac@savba.sk, igor.mokris@savba.sk

Abstract— The paper introduces an approach for estimating the linking and decay coefficients in the Optimized Pulse Coupled Neural Network with Modified feeding input (OM-PCNN). These two parameters are of significant importance for improving image recognition precision. The standard PCNN has ten parameters, so it is very difficult to find an optimal closed system of all of them. With the OM-PCNN only four parameters need to be set up – the linking coefficient, the decay coefficient of the threshold potential, the linking radius and the PCNN kernel. The remaining two parameters of the OM-PCNN are adjusted by the rule of computational complexity minimization.

I. INTRODUCTION
Nowadays the application spectrum of the PCNN [1] extends to all areas of image processing. The PCNN has been used mainly in image segmentation, noise filtering, feature generation for image recognition, edge and object detection in images, and also in the image fusion process. Cases of PCNN application in the speech recognition area are also known. Present research is oriented mainly towards optimization of the PCNN mathematical model and its parameters.

The PCNN has several powerful properties, such as a high degree of space dimension reduction, a minimal set of used etalons and a fixed structure of the input object matrices. The minimal number of features generated by the PCNN is related to these properties, too. A further powerful property is the invariance of the generated features against rotation, dilatation and translation of images. The PCNN does not require a learning process.

On the other hand, the PCNN has several disadvantages. The main disadvantage is a difficult mathematical model with a high number of parameters, and the related problem of their optimal estimation for feature generation. Further disadvantages are the high number of iteration steps in the feature generation process and the problem of selecting the features with the highest information value. These problems were very often solved by experiment. A much better solution is to modify the PCNN model [4-8].

Present research [4-8] on the standard PCNN focuses on optimization of the mathematical model and on parameter estimation, with the aim of reducing the number of generated features while reaching high image recognition performance. The optimized version, the OM-PCNN, has all the prerequisites for reducing the dimension of the image classification space and for image recognition.

The paper is divided into three sections. The first section introduces the general structure of the OM-PCNN, its mathematical model and the process of feature generation by the OM-PCNN. The second section describes the algorithm for linking and decay coefficient estimation. The final, third section shows experiments and results of feature generation using the estimated parameters.

II. PULSE COUPLED NEURAL NETWORK

A. Structure of OM-PCNN
The structure of the OM-PCNN is the same as the structure of the standard PCNN (Fig. 1), i.e. each modified version comes out of the standard PCNN neuron model (Fig. 2).

Figure 1. Structure of OM-PCNN

Every neuron (i, j) is associated with exactly one image pixel (i, j) of the image matrix S. The neuron receives input signals from its own neighborhood O, which consists of the feeding input OF and the linking input OL. In general, OF and OL are circular areas of the same radius r0. The weights of the connections between neuron (i, j) and its neighborhood OF are described by the matrix of weight coefficients M, and the weights of the connections between neuron (i, j) and its neighborhood OL are described by the matrix of weight coefficients W. The output quantity Yij of neuron (i, j) has an impact on the activation potential Ukl through its neighbor neurons (k, l), which belong to the neighborhood OL.

The structure of the pulse coupled neuron in the OM-PCNN is shown in Fig. 2. The OM-PCNN neuron consists of an input part, a linking part and a pulse generator. In the input part the linking potential Lij(n) is controlled by the linking coefficient β and is sequentially modulated with the feeding potential Fij(n) in the linking part. The pulse generator consists of a step function generator and a sigmoid function generator.
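As an illustration of how such a circular neighborhood can be built, the sketch below constructs the weight matrix W for a given radius r0 under the 1/r weight distribution that Section III later adopts; this is a minimal illustration under those assumptions, not the authors' implementation.

```python
import math

def linking_kernel(r0):
    """Weight matrix W for a circular neighborhood of radius r0,
    assuming the 1/r weight distribution used in Section III:
    w = 1/r for neighbors within radius r0, center weight 1."""
    size = 2 * int(r0) + 1
    c = size // 2
    W = [[0.0] * size for _ in range(size)]
    for i in range(size):
        for j in range(size):
            r = math.hypot(i - c, j - c)
            if r == 0:
                W[i][j] = 1.0        # center weight
            elif r <= r0:
                W[i][j] = 1.0 / r    # 1/r distribution
    return W

# For r0 = 1 only the four axial neighbors fall inside the circle
# (the diagonals lie at distance sqrt(2) > 1), which reproduces
# the matrix W given later in (10).
print(linking_kernel(1))
```

For larger r0 the same construction yields the broader kernels whose outer weights approach zero, which is the behavior exploited in step 4 of Section III.A.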

978-1-4244-5311-5/09/$26.00 ©2009 IEEE


R. Forgác and I. Mokriš • Algorithm for Pulse Coupled Neural Network Parameters Estimation

Figure 2. Structure of OM-PCNN neuron

B. Mathematical model of OM-PCNN
The OM-PCNN is based on the PCNN with modified feeding input (M-PCNN) [10]. It has neither an exponential nor a convolution part in the equation for the feeding potential of the neuron Fij(n). The OM-PCNN mathematical model can be described by the following equations [4-8]:

Fij(n) = Sij    (1)

Lij(n) = Kij(n)    (2)

Uij(n) = Fij(n) · (1 + βo · Lij(n))    (3)

Xij(n) = 1 / (1 + e^(Tij(n−1) − Uij(n)))    (4)

Yij(n) = 1 if Xij(n) > 0.5, 0 otherwise    (5)

Tij(n) = VT if Yij(n) = 1, αo · Tij(n−1) if Yij(n) = 0    (6)

where Fij(n) is the input feeding potential and Lij(n) the input linking potential of the neuron. Sij is the intensity of pixel (i, j) in the input image matrix S, which represents the intensity of the given image element. Xij is the activation quantity of the neuron defined by a sigmoid transfer function, and Yij is the activation quantity of the neuron defined by a step function. Uij is the internal activation quantity of the neuron, Tij is the threshold potential of the neuron, and n is the iteration step. The parameter αo is the threshold decay coefficient, βo is the linking coefficient and VT is the coefficient of the threshold potential. The elements of the matrix K(n) are calculated by the convolution

Kij(n) = (W ∗ Y(n−1))ij = Σkl wijkl · Ykl(n−1)    (7)

where ∗ is the convolution operator, W is the matrix of weight coefficients and wijkl is an element of the constituent matrix W for the feeding and linking input. Ykl(n−1) is the output quantity of the neuron (k, l) that belongs to the neighborhood OL of neuron (i, j) in the previous iteration step n−1. The values of the weight coefficients wijkl depend on the value of the linking radius r0 and on the implemented PCNN kernel for the neighborhoods OF and OL.

The feature generation principle of image recognition by the PCNN is based on the pulse generation of the individual neurons. The multilevel input image, which is represented by a two-dimensional matrix, is transformed through the PCNN into a sequence of temporary binary images. Each of these binary images is a matrix with the same dimension as the input matrix, and it is generated by a group of pixels with similar intensity.

The feature generation by (8) is invariant against the non-standard cases presented in [6]:

G(n) = Σij (Xij(n) · Yij(n)) / Σij Sij    (8)

where the numerator of the fraction represents the value before standardization and the denominator represents the feature standardization. The activation quantity Xij based on (4) belongs to the interval (0, 1), and the activation quantity Yij based on (5) may take only two values, namely 0 and 1. The OM-PCNN algorithm has the following form:
1. Calculation of the convolution matrix K for every iteration n by (7).
2. Calculation of the internal activation quantity Uij by (3).
3. Calculation of the output quantities Xij by (4) and Yij by (5).
4. Calculation of the threshold potential Tij by (6).

The OM-PCNN was used as the background for the design of the PCNN parameter estimation for better feature generation purposes.
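The iteration loop of the OM-PCNN model, eqs. (1)-(8), can be sketched as follows; the toy image, the parameter values and the iteration count are placeholder choices for illustration, not the estimated ones, and the 3x3 kernel is the W from (10).

```python
import math

# 3x3 linking kernel from (10), i.e. the 1/r distribution with r0 = 1
W = [[0.0, 1.0, 0.0],
     [1.0, 1.0, 1.0],
     [0.0, 1.0, 0.0]]

def om_pcnn_features(S, beta_o, alpha_o, V_T, n_steps):
    """Run the OM-PCNN model: feeding F = S (1), linking L = K (2),
    modulation U (3), sigmoid output X (4), step output Y (5),
    threshold reset/decay T (6), convolution K = W * Y (7) and
    normalized feature G(n) (8). S is a list of rows of pixel
    intensities in [0, 1]; returns the feature sequence G(n)."""
    rows, cols = len(S), len(S[0])
    Y = [[0.0] * cols for _ in range(rows)]
    T = [[V_T] * cols for _ in range(rows)]
    s_sum = sum(sum(row) for row in S)            # denominator of (8)
    G = []
    for n in range(n_steps):
        # (7): K = W * Y(n-1); W is symmetric, so correlation equals
        # convolution; zero padding outside the image
        K = [[sum(W[a][b] * Y[i + a - 1][j + b - 1]
                  for a in range(3) for b in range(3)
                  if 0 <= i + a - 1 < rows and 0 <= j + b - 1 < cols)
              for j in range(cols)] for i in range(rows)]
        Xn = [[0.0] * cols for _ in range(rows)]
        Yn = [[0.0] * cols for _ in range(rows)]
        for i in range(rows):
            for j in range(cols):
                U = S[i][j] * (1.0 + beta_o * K[i][j])           # (1)-(3)
                Xn[i][j] = 1.0 / (1.0 + math.exp(T[i][j] - U))   # (4)
                Yn[i][j] = 1.0 if Xn[i][j] > 0.5 else 0.0        # (5)
                T[i][j] = V_T if Yn[i][j] == 1.0 else alpha_o * T[i][j]  # (6)
        Y = Yn
        G.append(sum(Xn[i][j] * Yn[i][j]
                     for i in range(rows) for j in range(cols)) / s_sum)  # (8)
    return G

# Toy 4x4 image: a bright square on a dark background
S = [[0.9 if 1 <= i <= 2 and 1 <= j <= 2 else 0.2 for j in range(4)]
     for i in range(4)]
G = om_pcnn_features(S, beta_o=0.1, alpha_o=0.6, V_T=20.0, n_steps=12)
print([round(g, 3) for g in G])
```

With these placeholder values the bright pixels first pulse once the geometrically decaying threshold falls below their activation, producing the first nonzero G(n) value; the dark background pulses later, which is the pulse-grouping behavior the feature generation relies on.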


III. OM-PCNN PARAMETERS ESTIMATION

A. Estimation assumptions
It holds in general that optimal feature generation for image recognition by the PCNN depends on the selection of the PCNN model and on the settings of the PCNN parameters [9].

The reasoning behind the optimized OM-PCNN model comes out of the analysis of the M-PCNN [4] image recognition precision results. The idea of the optimization is based on the assumption that a change of the period ΔPij of the pulse generation of particular neurons causes an increased variance of the generated feature values. On the basis of the analysis of the evolution of the period Pij in the particular pulses of the G(n) function, it was shown that the features reach the minimal variance of values in the first pulse of the G(n) function.

In the standard PCNN it is necessary to set up to 10 parameters. In the modified M-PCNN the number of parameters was decreased to 8, but this is still a high number for estimation purposes. In the proposed OM-PCNN model only 6 parameters need to be set – r0, N, αo, VT, βo and the OM-PCNN kernel. Only two of them (i.e. αo, βo) need to be estimated. The process of continual reduction and calculation of the particular parameters can be described in the following five steps:

1. By simplifying the calculation of the linking potential Lij of the neuron by (2), the parameters αL and VL were eliminated. The number of OM-PCNN parameters decreased from 8 (M-PCNN) to 6.

2. By establishing the condition [4]

VT ≥ 6    (9)

the parameter VT does not have an influence on the recognition precision in the first pulse of the function G(n). It remains to set 5 parameters: N, αo, βo, r0 and the OM-PCNN kernel.

3. The number of iteration steps N can be set deterministically. By restricting the feature generation scope to the first pulse of the G(n) function, the undesirable influence of the neighborhood on each activated neuron is minimized. Thereby the number of generated features N is restricted either to the scope of the local maximum area of the first pulse of the G(n) function, or to the whole first pulse of the G(n) function. The neighborhood of the local maximum of the first pulse of the G(n) function is determined by three features. The feature generation is finished after reaching iteration nmax+1, where nmax is the iteration at which the local maximum of the first pulse of the G(n) function was found. For the classification purposes only the three feature values at nmax−1, nmax and nmax+1 were used. In the case of the whole first pulse of G(n), the neighborhood of the local maximum is widened to the whole first pulse of the G(n) function. In this case the feature generation is finished if a zero feature value is reached, or if the feature amplitude in the given iteration step decreases under an established threshold fraction of the local maximum amplitude of the first pulse. The threshold value was established as 0.01 % of the first local maximum value of the G(n) function, on the basis of the G(n) flows of the image etalons.

4. It can be assumed that with an increasing linking area OL it is possible to achieve better features for recognition. For each type of PCNN kernel there is, however, a boundary beyond which a further increase of the linking radius has minimal impact on the G(n) function [2, 3], because the values of the weight coefficients approach zero. On the contrary, the feature generation time increases, because the convolution matrix K must be recalculated in every iteration step. For the reason of minimizing the computational complexity of the feature generation by the OM-PCNN, the linking radius was set to the value r0 = 1.

5. The weight coefficients wijkl are calculated by the 1/r distribution [4]. In this case the matrix W has the dimension 3 x 3 and the values of the particular weight coefficients are given in the following form:

        [ 0 1 0 ]
   W =  [ 1 1 1 ]    (10)
        [ 0 1 0 ]

The use of another PCNN kernel, for example a kernel with Gaussian distribution, or the setting of a greater value of r0, has no significant influence on the recognition precision [2].

On the basis of the given criteria of the computational complexity of feature generation, it is necessary to propose an algorithm for the estimation of only the pair of parameters αo and βo.

B. Algorithm for parameter estimation
The idea of the estimation algorithm for the parameters αo and βo is based on finding the maximal distance ED between the two nearest images from different classes in Euclidean space.

The algorithm requires the creation of two groups of images for which the following conditions hold:
1. The first group of images represents the etalons from each image class. The mutual Euclidean distance between particular etalons must be greater than zero.
2. The second group of images is represented by a sample of k tested images whose distance from the etalon of the particular class is the greatest. From the analysis of the image recognition results by the M-PCNN it follows that images with the greatest rotation and, at the same time, dilatation influence will be chosen for this group [4]. In the case of not choosing the images with the greatest distance from the particular etalons, the parameters αo and βo would be optimized only for the scope of those rotated and at the same time dilated images whose ED values correspond to the chosen images. The sample of tested images must contain at least one image from each class.

The scopes of the intervals in which the suboptimal values of αo and βo will be searched are defined by the following criteria:
1. The linking coefficient βo will be tested on the interval (0 : 1>. If the parameter value βo were equal to 0, the modulation member βoKij in (3) would be eliminated. This modulation member contributes to the synchronization of the PCNN neural pulse generation. For βo > 1 the PCNN works in the multipulse mode [10], which is not suitable for feature generation because the linking input is saturated and the generated G(n) function converges to a constant value [3].
2. The value of the decay coefficient αo lies in (0 : 1). From equation (6) it is obvious that if the parameter αo is set to 1, the neurons in the PCNN cannot generate pulses, because the threshold potential Tij of each neuron is greater than 1 after initialization, while the other quantities, i.e. Lij, Uij, Yij, equal zero. It is possible to raise the bottom border of the parameter value αo to αo ≈ 0.3, because for lower values the G(n) function has a jagged flow (Fig. 3). If G(n) = 1, then Uij(n) = max, and if G(n) = 0, then Tij(n) = max.

Figure 3. Typical flow of the function G(n) for lower values αo < 0.3
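The behavior behind criterion 2 can be illustrated with a one-neuron simulation of the threshold branch of (6); the activation value U used here is an arbitrary placeholder, and only the decay mechanism itself is taken from the model.

```python
def first_pulse_iteration(alpha_o, U=0.9, V_T=20.0, max_steps=200):
    """For a single neuron with constant activation U, iterate the
    threshold decay of (6): T(n) = alpha_o * T(n-1) while Y = 0.
    By (4)-(5) the neuron fires as soon as T(n-1) < U, because the
    sigmoid (4) then exceeds 0.5. Returns the iteration of the
    first pulse, or None if the neuron never fires."""
    T = V_T
    for n in range(max_steps):
        if T < U:          # sigmoid output > 0.5 -> pulse by (5)
            return n
        T *= alpha_o       # decay branch of (6)
    return None            # alpha_o = 1: T stays at V_T > U forever

for a in (1.0, 0.9, 0.6, 0.3):
    print(a, first_pulse_iteration(a))
```

The run shows the two extremes named in the criteria: at αo = 1 the threshold never decays below the activation, so no pulse is ever generated, while small αo values collapse the pulse timing into the first few iterations, which is consistent with the jagged G(n) flow reported for αo < 0.3.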


The parameter estimation algorithm for αo and βo has an iterative nature. The parameter βo is iteratively incremented with the step Δβo in the main loop, and in the inner loop the parameter αo is incremented with the step Δαo:

1. for βo = (0 : 1> step Δβo
2.   for αo = (0 : 1) step Δαo
3.     Calculation of the Euclidean distances eD between the etalon and the tested images of the same class, for all image classes. The values are stored in the vector ED1(i, k), where i is the index of the corresponding class and k is the index of the tested image.
4.     Selection of the greatest value eD1(i) from the vector ED1(i, k) for each image class i.
5.     Calculation of the Euclidean distances eD between the etalon and the tested images that belong to different classes, for all image classes. The values are stored in the vector ED2(i, k).
6.     Selection of the lowest value eD2(i) from the vector ED2(i, k) for each image class i.
7.     Calculation of the difference ΔeD(i) = eD2(i) − eD1(i) for each image class i:
       - if ΔeD(i) > 0, then both the etalon and the tested image belong to the same class;
       - if ΔeD(i) < 0, then the etalon and the tested image are from different classes.
8.     Calculation of the average value Δe*D from all the ΔeD(i) calculated in step 7 for the set values of αo and βo. The average values Δe*D for each pair of parameters αo and βo are stored in the vector ED.
9.   end
10. end
11. Selection of the greatest value Emax of the vector ED. The parameter values αo and βo corresponding to the maximal value Emax are considered the suboptimal values.

Since it is not feasible to test the couple of parameters αo and βo over the whole continuous intervals αo = (0 : 1) and βo = (0 : 1>, only their suboptimal values can be found by the proposed algorithm.

The computational complexity of the algorithm depends on the number of image classes, on the number of tested images in the second group, and on the setting of the incremental steps of the estimated parameters αo and βo.

IV. RESULTS OF EXPERIMENTS
The testing image set was created from combinations of particular geometrical transformations of the images. In the case of enlarging the test image set by a further image class, it is sufficient to add to the training set a new etalon represented by the corresponding feature vector G(n).

The etalon set and the testing set are created from texture images. The training set consists of 13 textures in the basic form: bark, bricks, bubbles, grass, leather, pigskin, wave, wood, raffia, sand, straw, water and wool. The original database [11] with the 13 images in the basic form was processed into matrices of 450 x 450 image pixels. Each image class was enlarged into a set of rotated, dilated, and both rotated and dilated images. The testing set contains 4784 images in total.

The estimation of the parameters αo and βo was realized by the algorithm described in the previous section of this paper. In the algorithm the parameter βo was incremented step by step with the step Δβo = 0.1 in the main loop, and the parameter αo with the step Δαo = 0.05 in the inner loop. With regard to the large scope of the processed data, only one part of the results is presented here, focused on monitoring the progress of the suboptimal values αo and βo where the values of the vector ED are the highest.

The influence of the parameter βo on finding the highest Euclidean distance Emax of the two nearest textures from various classification classes is shown in Fig. 4. The functional dependence ED(βo) of the nearest images has a descending tendency with an ascending value of βo. The highest Emax value was achieved for βo = 0.1, i.e. this value is considered suboptimal.

Figure 4. Influence of the parameter βo on finding the highest Euclidean distance Emax

The finding of the optimal parameter αo with the highest value of the Euclidean distance Emax of the two nearest textures from various classification classes is shown in Fig. 5.

Figure 5. Influence of the parameter αo on finding the highest Euclidean distance Emax

On the basis of the results of the algorithm for finding the suboptimal parameters αo and βo in the OM-PCNN, the pair αo = 0.6 and βo = 0.1 was found for the testing set of textures, at the constant setting of the parameters VT = 20 and r0 = 1.

The theoretical assumptions of the OM-PCNN with the algorithm for the estimation of the parameters αo and βo described in this paper were confirmed by experiment. The comparison of the recognition precision of the OM-PCNN with only 3 generated features and the M-PCNN with 3, 10 and 50 features is shown in Fig. 6. The best recognition precision and, at the same time, the shortest feature generation time was achieved by the OM-PCNN.
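The nested estimation loop of Section III.B can be sketched as follows. Here `features` is a hypothetical stub standing in for the OM-PCNN feature generator that produces the G(n) vector for a given image and parameter pair; the step sizes default to the Δβo = 0.1 and Δαo = 0.05 used in the experiments.

```python
import math

def euclid(u, v):
    """Euclidean distance between two feature vectors G(n)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def estimate_parameters(etalons, tested, features,
                        d_beta=0.1, d_alpha=0.05):
    """Suboptimal (alpha_o, beta_o) search of Section III.B.
    etalons: {class: image}, tested: {class: [images]},
    features(image, alpha_o, beta_o) -> feature vector (a stub for
    the OM-PCNN generator). Returns (alpha_o, beta_o, E_max).
    Keeping a running maximum is equivalent to storing every
    average in the vector ED and selecting E_max in step 11."""
    best = (None, None, -math.inf)
    b = d_beta
    while b <= 1.0 + 1e-9:                      # beta_o in (0 : 1>
        a = d_alpha
        while a < 1.0 - 1e-9:                   # alpha_o in (0 : 1)
            deltas = []
            for ci, et in etalons.items():
                g_et = features(et, a, b)
                # steps 3-4: greatest distance to own-class images
                e1 = max(euclid(g_et, features(im, a, b))
                         for im in tested[ci])
                # steps 5-6: lowest distance to other-class images
                e2 = min(euclid(g_et, features(im, a, b))
                         for cj, ims in tested.items() if cj != ci
                         for im in ims)
                deltas.append(e2 - e1)          # step 7
            e_avg = sum(deltas) / len(deltas)   # step 8
            if e_avg > best[2]:                 # step 11
                best = (a, b, e_avg)
            a += d_alpha
        b += d_beta
    return best

# Toy usage with a trivial stub feature generator (two scalar
# "images" per class, features independent of the parameters):
feat = lambda im, a, b: [im, im]
a_o, b_o, e_max = estimate_parameters({0: 0.0, 1: 1.0},
                                      {0: [0.1], 1: [0.9]}, feat)
print(a_o, b_o, round(e_max, 3))
```

With the real OM-PCNN feature generator plugged in, the cost of the search grows with the number of classes, the number of tested images and the fineness of the two step sizes, exactly as noted in the complexity remark of Section III.B.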


Figure 6. Recognition precision comparison of OM-PCNN (3 features) and M-PCNN (3, 10 and 50 features) over the 13 texture classes

V. CONCLUSION
The aim of the proposed optimized OM-PCNN with the algorithm for the estimation of the parameters αo and βo was to achieve a better reduction of the number of generated features while preserving a recognition precision comparable to that of the M-PCNN. This was also confirmed experimentally. A lower dispersion of the generated features was achieved within the scope of the first pulse of the G(n) function.

ACKNOWLEDGMENT
This work was supported by the Slovak Science Agency VEGA No. 2/7098/27 and VEGA No. 2/0211/09.

REFERENCES
[1] R. Eckhorn et al., "Feature Linking via Synchronization among Distributed Assemblies: Simulations of Results from Cat Visual Cortex", Neural Computation, Vol. 2, 1990, pp. 293-307.
[2] R. Forgáč and I. Mokriš, "Parameter Influence of Pulse Coupled Neural Network for Image Recognition", Journal of Applied Computer Science, Vol. 9, No. 2, 2001, pp. 31-44.
[3] R. Forgáč and I. Mokriš, "New Approach to Estimation of Pulse Coupled Neural Network Parameters in Image Recognition Process", in Sinčák, P. et al. (Eds.): Intelligent Technologies – Theory and Applications, IOS Press, Amsterdam, 2002, ISBN 1-58603-256-9, pp. 304-308.
[4] R. Forgáč, Dimension Reduction of Image Classification Space by Pulse Coupled Neural Networks. [PhD thesis]. FEI TU Košice, 2005, 108 p. (in Slovak).
[5] R. Forgáč and I. Mokriš, "Influence of Pulse Coupled Neural Network Initialization on Classification Tasks", in Proc. of Multimedia in Business, 2006, Kielce, ISBN 83-7251-673-1, pp. 335-342.
[6] R. Forgáč and I. Mokriš, "Feature Generation Improving by Optimized PCNN", in SAMI 2008 – 6th International Symposium on Applied Machine Intelligence and Informatics, Herľany, Slovakia, January 21-22, 2008, IEEE Catalog Number CFP0808E-CDR, ISBN 978-1-4244-2106-0, pp. 203-207.
[7] R. Forgáč and I. Mokriš, "Threshold Potential Optimization in the Pulse Coupled Neural Network", in SISY'08 – 6th International Symposium on Intelligent Systems and Informatics, Sept. 26-27, 2008, Subotica, IEEE Catalog Number CFP0884C-CDR, ISBN 978-1-4244-2407-8.
[8] R. Forgáč and I. Mokriš, "Linking and Activation Potential Optimization in the Pulse Coupled Neural Network", in ICCC 2008 – IEEE 6th International Conference on Computational Cybernetics, 2008, ISBN 978-1-4244-2875-5, pp. 85-88.
[9] J. L. Johnson and M. L. Padgett, "PCNN Models and Applications", IEEE Transactions on Neural Networks, Vol. 10, No. 3, 1999, pp. 480-498.
[10] G. Kuntimad and H. S. Ranganath, "Perfect Image Segmentation Using Pulse Coupled Neural Networks", IEEE Transactions on Neural Networks, Vol. 10, No. 3, 1999, pp. 591-598.
[11] USC-SIPI Image Database, Signal and Image Processing Institute, University of Southern California, Los Angeles, http://sipi.usc.edu/database

