Conference Paper · September 2003
DOI: 10.1109/SICE.2003.1324148 · Source: IEEE Xplore

SICE Annual Conference in Fukui, August 4-6, 2003
Fukui University, Japan

A Method to Solve the Four-Coloring Problem by the Neural Network with Self-Feedback

Wei-Dong SUN1, Hiroki TAMURA1, Zheng TANG1 and Masahiro ISHII1
1 Faculty of Engineering, Toyama University, 930-8555, Gofuku 3190, Toyama City, Japan.
sun@hi.iis.toyama-u.ac.jp

Abstract: In this paper, we introduce a method to solve the constraint satisfaction problem using a neural network with self-feedback. Simulation results on its application to the classic four-coloring problem illustrate its effectiveness and practicality.
Keywords: Constraint satisfaction problem, Neural network, Self-feedback, Four-coloring problem.

1. Introduction

The constraint satisfaction problem is the problem of finding a consistent assignment of values to variables. The four-coloring problem is one kind of constraint satisfaction problem. The idea of using a neural network to provide solutions originated in 1985, when Hopfield and Tank demonstrated that the traveling salesman problem could be solved using the Hopfield neural network [1][2]. Since Hopfield and Tank's work, there has been growing interest in using neural networks to solve constraint satisfaction problems.

In this paper we propose a new method to solve the constraint satisfaction problem using a neural network with self-feedback. In this method, all the constraint conditions of a constraint satisfaction problem are divided into two restrictions: restriction I and restriction II. In each processing step, restriction II is satisfied by setting its value to 0, while the value of restriction I is always driven in the decreasing direction. The optimal solution is obtained when the values of the energy, restriction I and restriction II become 0 at the same time. In addition, if an optimal solution is obtained, all neurons in the network are stabilized in that state; otherwise, an unstable state is formed. A self-feedback function of the neuron is employed in the neural network to maintain the present state. To verify the validity of the proposed method, we apply it to the four-coloring problem. The simulation results show that the optimal solution can be obtained at high speed and with a high convergence rate. Moreover, the comparison results illustrate that the proposed method performs better than other methods.

2. The Neural Network with Self-Feedback

The mathematical model of the neural network consists of two components: neurons and synaptic links. The output signal transmitted from a neuron propagates to other neurons through the synaptic links. The input signal of a neuron is determined by the linear sum of weighted signals from the other neurons, where each weight is the strength of the corresponding synaptic link. In our algorithm, the McCulloch-Pitts binary neuron model [3] is used. The McCulloch-Pitts input/output function is given by Vi = f(Ui) = 1 when Ui > 0 and 0 otherwise, where Vi and Ui are the output and input of the ith neuron, respectively. Moreover, in this paper each neuron (see Fig.1) of the neural network has a self-feedback function and constitutes a network for each constraint condition, as shown in Fig.2.

Figure 1: A neuron with self-feedback.

Figure 2: Neural network with self-feedback.

In the neural network with self-feedback, the output state of a neuron is a binary value, 0 or 1, which determines whether the neuron is firing or not: an output of 0 means the neuron is not firing, and an output of 1 means it is firing. If the output state of a neuron is a factor that makes the energy function increase, a signal is given to the neuron to reverse its output state. If the output state of a neuron is not a factor that makes the energy function increase, its present state is maintained (the stable state of the neuron).
-2632- PR0001/03/0000-2632 ¥400 © 2003 SICE
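The binary input/output behavior and the self-feedback just described can be sketched in a few lines of Python. This is our own illustrative sketch, not the authors' implementation, and the function names are ours.

```python
# McCulloch-Pitts binary threshold: V_i = f(U_i) = 1 if U_i > 0, else 0.
def mcculloch_pitts(u):
    return 1 if u > 0 else 0

# Self-feedback: the previous output V_i(t) is added back into the next
# input, so with no constraint pressure the neuron keeps its state.
def next_input(constraint_signal, v_prev):
    return constraint_signal + v_prev

v = 1                                    # a firing neuron
v = mcculloch_pitts(next_input(0, v))    # no pressure: stays firing (1)
w = 0                                    # a silent neuron
w = mcculloch_pitts(next_input(0, w))    # no pressure: stays silent (0)
```

A negative constraint signal of magnitude greater than the feedback (for example -2) drives a firing neuron back to 0, which is exactly how the restriction terms later reverse an output state that increases the energy.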


For this, it is necessary for the output state of a neuron to be preserved in some form. For this state preservation, the self-feedback function of the neural network is used in this paper. This enables the network to be stabilized at the minimum-value state of the constraint satisfaction problem (energy function = 0). Moreover, when the neural network is not in the minimum-value state, the neurons are in an unstable state.

3. The Proposed Method to Solve the Constraint Satisfaction Problem

In this section, we describe a new method to solve the constraint satisfaction problem using the neural network with self-feedback. Note that the processing in the neural network is sequential.

3.1 Simplification of the constraint satisfaction problem

A constraint satisfaction problem usually consists of several constraint condition formulas. In the proposed method, we classify these constraint condition formulas into two kinds: restriction I and restriction II. The conceptual figure of the proposed method is shown in Fig.3. Restriction I is always driven in the decreasing direction; restriction II is satisfied by setting its value to 0 in each processing step. In other words, if a certain value of restriction II increases, the same quantity will decrease, and it surely returns to 0. The optimum solution is obtained when the values of both restrictions become 0. State changes are performed sequentially for every neuron. If a neuron is in a state that satisfies the restrictions, it is stabilized in that state; on the contrary, it becomes unstable if it is in a state that does not satisfy the restrictions.

Figure 3: The conceptual figure of the proposed method.

Although a constraint satisfaction problem tends to become complicated because it consists of several condition formulas, if some condition formulas are satisfied during the processing, the problem is simplified and it becomes easier to derive a solution. Moreover, the number of procedures required to derive the optimum solution decreases as the formulas reduce to a unary form, so it is expected that the optimum solution can be derived in a short time.

3.2 Formulization of restrictions I and II

As discussed above, in the proposed method the formulation of a constraint satisfaction problem consists of two restriction conditions: restriction I and restriction II. Next, we give the mathematical description of the proposed method.

[1] For a certain neuron (i) at time (t), the value of restriction I is assumed to be aij(t), and that of restriction II to be bik(t). Therefore, the input of neuron (i) can be described as:

    Ui(t + 1) = restriction I + restriction II + present state maintenance
              = aij(t) + bik(t) + Vi(t)                                    (1)

[2] At time (t), the energy function is assumed to be Σ_{j=1}^{M} Pj(t) for restriction I and Σ_{k=1}^{M'} Qk(t) for restriction II, where Pj(t) = Σ_{i=1}^{L} aij(t) and Qk(t) = Σ_{i=1}^{L} bik(t). So, the energy function of the network can be given by

    E = Σ_{j=1}^{M} Pj(t) + Σ_{k=1}^{M'} Qk(t)                             (2)

Here, L is the number of neurons (1, 2, ..., i, ..., L); M is the number of restriction-I conditions (1, 2, ..., j, ..., M); and M' is the number of restriction-II conditions (1, 2, ..., k, ..., M').

Thus, according to the definitions above, the following conditions must be satisfied for restrictions I and II in the proposed method.

[Restriction I] Restriction I is formulized as follows:

    Σ_{j=1}^{M} Pj(t + α) − Σ_{j=1}^{M} Pj(t) ≤ 0                          (3)

This formula makes the energy function always move in the decreasing direction. Here, t + α is the time at which the processing in the neural network that started at time t has been completed.

[Restriction II] Restriction II is formulized as follows:

    bik(t + 1) − bik(t) ≤ 0                                                (4)

When a processing step is carried out for neuron (i), formula (4) states that the value of restriction II at time t + 1 is the same as, or less than, its value at time t. Even if bik(t) of a neuron increases temporarily according to the conditions of restriction I, or the restriction-II conditions of other neurons, it returns to the state Σ_{k=1}^{M'} Qk(t) = 0 within one loop. Therefore, it is not necessary to formulize it like restriction I to make the energy function move in the decreasing direction.

3.3 Algorithm

The following procedure describes the algorithm for solving constraint satisfaction problems based on the proposed method. Note that t is the step number and t_limit is the maximum number of iteration steps.

(1) Set t = 0, Δt = 1, and set t_limit and the other parameters.
(2) Randomize the initial values of Ui(0) for i = 1, 2, ..., N.
(3) Evaluate the values of Vi(0) according to the values of Ui(0).
(4) For t = 1 to t_limit, do:
    (a) Initialize Ui = 0 for i = 1, 2, ..., N.
    (b) Update Ui(t + 1) and Vi(t + 1) for i = 1, 2, ..., N.
    (c) Calculate the energy E.
    (d) Check the system energy. If E = 0 (the optimum solution has been obtained), end the procedure.
    end do

4. Application to the Four-Coloring Problem

4.1 About the four-coloring problem

A mapmaker colors adjacent countries with different colors so that they may be easily distinguished. This is not a problem as long as one has a large number of colors. However, it is more difficult with the constraint that one must use the minimum number of colors required for a given map. It is still easy to color a map with a small number of regions. In the early 1850's, Francis Guthrie was interested in this problem, and he brought it to the attention of Augustus De Morgan. Since then many mathematicians, including Arthur Kempe, Peter Tait, Percy Heawood, and others, tried to prove that any planar simple graph can be colored with four colors. The four-coloring problem is defined as coloring the regions of a map in such a way that no two adjacent regions (that is, regions sharing some common boundary) are of the same color. In August 1976, Appel and Haken presented their work to members of the American Mathematical Society [4]. They showed a computer-aided proof of the four-coloring problem. However, their coloring was based on a sequential method, so it took many hours to solve a large problem. Their computation time may be proportional to O(x²), where x is the number of regions. Moreover, few parallel algorithms have been reported. Dahl [5], Moopenn et al. [6], and Thakoor et al. [7] presented the first neural networks for map K-colorability problems.

Figure 4: An example of a 7-region map.

Figure 5: Neural representation of the 7-region map.

In order to map the four-coloring problem onto the neural network, an x × 4 two-dimensional neural array is needed, where x is the number of regions to be colored, and a single region requires four neurons for the single-color assignment. The 7-region map is colored by four colors as shown in Fig.4. If red, yellow, blue and green are represented by 1000, 0100, 0010 and 0001, respectively, the neural representation of the problem is given in Fig.5, where a 7 × 4 neural array is used. Fig.5 also shows the 7 × 7 adjacency matrix D of the seven-region map, which gives the boundary information between regions: Dxy = 1 if regions x and y are adjacent.

4.2 The motion equation and energy function

According to the constraint conditions of the four-coloring problem, we can obtain the motion equation for neuron c as

    Uxc(t + 1) = restriction I + restriction II + present state maintenance
               = −A(Σ_{k=1}^{4} Vxk − 1) − B Σ_{y=1}^{N} Dxy Vyc + Vxc(t) + dAxc   (5)

where A and B are positive constants, D is the adjacency matrix, and Vxk is the output of the kth neuron of region x.

Depending on Uxc, Vxc is determined by

    Vxc = 1 (Uxc > 0),  Vxc = 0 (Uxc ≤ 0)                                  (6)
In equation (5), the first term represents the row constraint in the neural array, which forces each region to be colored by one and only one color. It corresponds to restriction I in this paper. In the case that no color neuron is firing, Σ_{k=1}^{4} Vxk = 0, so −(Σ_{k=1}^{4} Vxk − 1) = +1. This means the value of U is changed in the positive direction; that is, V is drawn towards the firing direction. In the case that exactly one color neuron is firing, Σ_{k=1}^{4} Vxk = 1, so −(Σ_{k=1}^{4} Vxk − 1) = 0, and there is no change in U. Similarly, in the case that two or more color neurons are firing, the value of U is changed in the negative direction; that is, V is drawn towards the non-firing direction.

Figure 7: The change situation of the energy (48 regions).

Figure 6: An example of a colorless state.

Figure 8: The change situation of the energy (110 regions).

The second term represents the constraint that the same color cannot be assigned to adjacent regions. It corresponds to restriction II in this paper. Dxy Vyc becomes +1 only in the case that the same color neuron is firing in the two adjacent regions x and y: Dxy = 0 if regions x and y are not adjacent, and Vyc = 0 if the same color neuron in region y is not firing. Thus, the second term is the sum of firing c neurons in the regions adjoining region x. Since this sum is multiplied by −B, the larger this sum is, the more V is drawn towards the non-firing direction.
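The second-term sum can be isolated in a tiny sketch (the names are ours, not from the paper): it counts the firing same-color neurons among the neighbours of region x, and multiplying it by −B turns that count into non-firing pressure.

```python
def same_color_pressure(D, V, x, c):
    """Second-term sum of equation (5): sum over y of D_xy * V_yc."""
    return sum(D[x][y] * V[y][c] for y in range(len(D)))

# Two adjacent regions both showing color 0: region 0 feels pressure 1
# for color 0 from region 1, and no pressure for color 1.
D = [[0, 1],
     [1, 0]]
V = [[1, 0, 0, 0],
     [1, 0, 0, 0]]
p0 = same_color_pressure(D, V, 0, 0)   # -> 1
p1 = same_color_pressure(D, V, 0, 1)   # -> 0
```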
The third term represents the state of the color neuron before updating, and it realizes the self-feedback of the neurons. In addition, since state changes in the proposed method are performed sequentially for every neuron, values of Vxc at time (t) and time (t + 1) are intermingled. In equation (5), the first term becomes 0 and the second term becomes 0 or negative when the respective restriction conditions are fulfilled. Without self-feedback to hold the state before updating, the firing state could not be maintained while the first and second terms satisfy their restriction conditions at 0. Moreover, in the case that restriction I is satisfied (the first term = 0) but restriction II is not satisfied (the second term ≠ 0), the output Vxc of a neuron is extinguished in order to satisfy restriction II even if the neuron is firing. That is, the output state of a neuron is determined so that restriction II is always satisfied. This suggests that the convergence state of the neural network is either a correct answer,

Figure 9: The change situation of the energy (210 regions).

Table 1: Simulation results of four-coloring problems.

Regions N | Takefuji's method [8]  | Yamada's method [9]    | Proposed method
          | Conv.(%)   Ave. steps  | Conv.(%)   Ave. steps  | Conv.(%)   Ave. steps
48        | 100        89          | 100        6           | 100        12.91
110       | -          -           | -          -           | 100        54.17
210       | 100        769         | 100        49          | 100        11.22

or a state where restriction I is not satisfied. To avoid the state where restriction I is not satisfied, the fourth term is employed.

The fourth term dAxc is a special term that forcibly fires a neuron of a region (the round region in the middle of Fig.6) when the surroundings of this region are colored by all four colors, as shown in Fig.6, so that no color can be placed in it. That is, when a region has no color in the convergent state, the neural network gives a large positive value to dAxc and makes one of its neurons fire forcibly.

The energy function for the four-coloring problem is given by the following formula:

    E = Σ_{j=1}^{M} Pj(t) + Σ_{k=1}^{M'} Qk(t)
      = (A/2) Σ_{x=1}^{N} (Σ_{c=1}^{4} Vxc − 1)² + (B/2) Σ_{x=1}^{N} Σ_{y=1}^{N} Σ_{c=1}^{4} Dxy Vxc Vyc   (7)

The first term is the constraint that a region is colored by one and only one color. If the constraint is satisfied, the value of the first term becomes 0; otherwise, it becomes a positive value. The second term is the constraint that adjoining regions cannot be colored by the same color. If the constraint is satisfied, the value of the second term becomes 0; otherwise, it becomes a positive value. On the whole, the energy function takes only positive values or 0, and its value increases if the restrictions are not satisfied. Therefore, the energy function of the neural network changes in the decreasing direction as the restrictions become satisfied. When the values of the two restrictions become 0, the value of the energy becomes 0 as well, and the optimum solution is obtained.

4.3 Simulation Results

In this section, we apply the proposed method to the four-coloring problem. Simulations are performed on three kinds of maps: 48 regions (the American map), 110 regions, and 210 regions. The parameters are set to A = 1, B = 1, t_limit = 1000.

The change situation of the energy   This simulation aims to observe the change situations of the system energy, restriction I and restriction II until an optimum solution is obtained. First, in the initial state we let Vxc = 0; the change situations of the energy, restriction I and restriction II are illustrated in Fig.7, Fig.8 and Fig.9, respectively. The initial value of dAxc is set to 0, and after 10 steps (at which point the network is judged to be in the convergence state), it is set to 100. As shown in Fig.7, the minimum value can be obtained in 1 step for the 48 regions. Fig.7, Fig.8 and Fig.9 illustrate that restriction I is drawn in the decreasing direction and restriction II is always satisfied until the convergence state of the neural network is obtained. This is in agreement with the behavior explained in Section 3 (see Fig.3).

The comparison with other methods   Next, in order to compare the proposed method with other methods, the simulations are performed 100 times from different initial states. As examples, the other methods include Takefuji's method [8] and Yamada's method [9]. The data of Takefuji's method and Yamada's method used in this paper are taken from paper [9]. The regions that can be compared are the maps of 48 regions and 210 regions, and we use the convergence rate and the average number of steps of each solution method for comparison. Here, the average number of steps is the average number of steps required for convergence. The convergence rate expresses the average number of times the optimum solution was reached during the 100 trials. The simulation results are shown in Table 1.

Table 1 shows that a 100% convergence rate is obtained by all solution methods. The minimum value is calculated with the fewest steps for the 48-region map by Yamada's method. However, as shown in Fig.7, the minimum value can also be calculated in only one step by the proposed method, depending on the initial state of the neurons. For the 210-region map, the minimum value is calculated with the fewest steps by our method. This indicates that the proposed method performs better than the other methods.

5. Conclusions

In this paper, we proposed a new method to solve constraint satisfaction problems using the neural network with self-feedback. In this method, all the constraint conditions of a constraint satisfaction problem are

divided into two restrictions: restriction I and restriction II. In each processing step, restriction II is satisfied by setting its value to 0, while the value of restriction I is always driven in the decreasing direction. The optimum solution is obtained when the values of the energy, restriction I and restriction II become 0 at the same time. As a typical example of constraint satisfaction problems, the proposed method was demonstrated by simulating the four-coloring problem. The simulation results showed that the proposed method reduces the average number of steps while maintaining a 100% convergence rate, and performs better than the other methods.

References

[1] J. J. Hopfield and D. W. Tank, "Neural" computation of decisions in optimization problems, Biol. Cybern., No.52, pp.141-152, 1985.

[2] J. J. Hopfield and D. W. Tank, Computing with neural circuits: A model, Science, No.233, pp.625-633, 1986.

[3] W. S. McCulloch and W. Pitts, A logical calculus of the ideas immanent in nervous activity, Bull. Math. Biophysics, No.5, pp.115-133, 1943.

[4] K. Appel and W. Haken, The solution of the four-color-map problem, Scientific American, pp.108-121, Oct. 1977.

[5] E. D. Dahl, Neural network algorithm for an NP-complete problem: Map and graph coloring, in Proc. First Int. Conf. on Neural Networks, Vol.III, pp.113-120, 1987.

[6] A. Moopenn et al., A neurocomputer based on an analog-digital hybrid architecture, in Proc. First Int. Conf. Neural Networks, Vol.III, pp.479-486, 1987.

[7] A. P. Thakoor et al., Electronic hardware implementations of neural networks, Appl. Opt., Vol.26, pp.5085-5092, 1987.

[8] Y. Takefuji and K. C. Lee, Artificial neural networks for four-coloring map problem and K-colorability problems, IEEE Trans. Circuits & Syst., Vol.38, No.3, pp.326-333, 1991.

[9] Yamada et al., Four-coloring problem algorithm using the maximum neural network, IEICE Technical Report, NLP97-144, NC97-96, pp.59-66, 1998.
