A method to solve the four-coloring problem by the neural network with self-feedback
4 authors, including: Masahiro Ishii (Sapporo City University)
All content following this page was uploaded by Hiroki Tamura on 22 February 2014.
Abstract: In this paper, using a neural network with self-feedback, we introduce a method to solve the constraint satisfaction problem. Simulation results on its application to the classic four-coloring problem illustrate its effectiveness and practicality.
Keywords: Constraint satisfaction problem, Neural network, Self-feedback, Four-coloring problem.
1. Introduction

The constraint satisfaction problem is the problem of finding a consistent assignment of values to variables. The four-coloring problem is one kind of constraint satisfaction problem. The idea of using a neural network to provide solutions originated in 1985, when Hopfield and Tank demonstrated that the traveling salesman problem could be solved using the Hopfield neural network [1][2]. Since Hopfield and Tank's work, there has been growing interest in using neural networks to solve the constraint satisfaction problem.

In this paper we propose a new method to solve the constraint satisfaction problem using a neural network with self-feedback. In this method, all the restriction conditions of a constraint satisfaction problem are divided into two restrictions: restriction I and restriction II. In each processing step, restriction II is kept satisfied by setting its value to 0, while the value of restriction I is always moved in the decreasing direction. The optimal solution is obtained when the values of the energy, restriction I, and restriction II become 0 at the same time. In addition, if an optimal solution is obtained, all neurons in the network can be stabilized in that state; otherwise, an unstable state can be formed. A self-feedback function of the neuron is employed in the neural network to maintain the present state. To verify the validity of the proposed method, we apply it to the four-coloring problem. The simulation results show that the optimal solution can be obtained at high speed and with a high convergence rate. Moreover, the comparison results illustrate that the proposed method performs better than other methods.

[...] the linear sum of weighted signals from the other neurons, where the respective weight is the strength of the synaptic link. In our algorithm the McCulloch-Pitts binary neuron model [3] is used. The McCulloch-Pitts input/output function is given by V_i = f(U_i) = 1 when U_i > 0 and 0 otherwise, where V_i and U_i are the output and input of the i-th neuron, respectively. Moreover, in this paper each neuron of the neural network (see Fig.1) has the function of self-feedback and constitutes the network for every constraint condition, as shown in Fig.2.

Figure 1: A neuron with self-feedback.

Figure 2: Neural network with self-feedback.
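As a concrete illustration, the McCulloch-Pitts input/output function and the weighted-sum input described above can be sketched in a few lines of Python. The weight values and the explicit `self_feedback` argument are illustrative assumptions only; the actual wiring used in the paper is given by the motion equation of Section 4.2.

```python
def mcculloch_pitts_output(u):
    """McCulloch-Pitts binary output: V = 1 if U > 0, else 0."""
    return 1 if u > 0 else 0

def neuron_input(weights, outputs, self_feedback, own_output):
    """Input U as the linear sum of weighted signals from the other
    neurons, plus a self-feedback term through which the neuron sees
    its own present output (weights here are illustrative)."""
    return sum(w * v for w, v in zip(weights, outputs)) + self_feedback * own_output

# Example: two unit-weight inputs, one firing neighbor, unit self-feedback.
u = neuron_input([1.0, 1.0], [1, 0], 1.0, own_output=1)
v = mcculloch_pitts_output(u)  # u = 2.0 > 0, so v = 1
```

The self-feedback term is what lets a firing neuron keep contributing positively to its own input, which is how the network carries out present-state maintenance.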
-2633-
When a step of processing is carried out for neuron i, this formula shows that the value of restriction II at time t + 1 is the same as that at time t, or smaller. Even if b_{ik}(t) of a neuron increases temporarily according to the conditions of restriction I, or the conditions of the restriction II of other neurons, it returns to the state of \sum_{k=1}^{M'} Q_k(t) = 0 within one loop. Therefore, it is not necessary to formulate restriction II like restriction I in order to make the energy function move in the decreasing direction.

3.3 Algorithms

The following procedure describes the algorithm for solving constraint satisfaction problems based on the proposed method. Note that t is the step number and t_limit is the maximum number of iteration steps.

(1) Set t = 0, Δt = 1, and set t_limit and the other parameters.
(2) Randomize the initial values of U_i(0) for i = 1, 2, ..., N.
(3) Evaluate the values of V_i(0) according to the values of U_i(0).
(4) For t = 1 to t_limit, do:
    (a) Initialize U_i = 0 for i = 1, 2, ..., N.
    (b) Update U_i(t + 1) and V_i(t + 1) for i = 1, 2, ..., N.
    (c) Calculate the energy E.
    (d) Check the system energy. If E = 0 (the optimum solution has been obtained), end the procedure.
End do.

4. Application to the four-coloring problem

4.1 About the four-coloring problem

A mapmaker colors adjacent countries with different colors so that they may be easily distinguished. This is not a problem as long as one has a large number of colors. However, it becomes more difficult under the constraint that one must use the minimum number of colors required for a given map. It is still easy to color a map with a small number of regions. In the early 1850s, Francis Guthrie became interested in this problem and brought it to the attention of Augustus De Morgan. Since then many mathematicians, including Arthur Kempe, Peter Tait, Percy Heawood, and others, have tried to prove that any planar simple graph can be colored with four colors. The four-coloring problem is defined as follows: one wants to color the regions of a map in such a way that no two adjacent regions (that is, regions sharing some common boundary) are of the same color. In August 1976, Appel and Haken presented their work to members of the American Mathematical Society [4]. They showed a computer-aided proof of the four-coloring problem. However, their coloring was based on a sequential method, so it took many hours to solve a large problem. Their computation time may be proportional to O(x^2), where x is the number of regions. Moreover, few parallel algorithms have been reported. Dahl [5], Moopenn et al. [6], and Thakoor et al. [7] presented the first neural networks for map K-colorability problems.

Figure 4: An example of a 7-region map.

Figure 5: Neural representation of the 7-region map.

In order to map the four-coloring problem to the neural network, an x × 4 two-dimensional neural array is needed, where x is the number of regions to be colored; a single region requires four neurons for the single-color assignment. The 7-region map is colored by four colors as shown in Fig.4. If red, yellow, blue, and green are represented by 1000, 0100, 0010, and 0001, respectively, the neural representation for the problem is given in Fig.5, where a 7 × 4 neural array is used. Fig.5 also shows the 7 × 7 adjacency matrix D of the seven-region map, which gives the boundary information between regions: D_{xy} = 1 if regions x and y share a common boundary.

4.2 The motion equation and energy function

According to the constraint conditions of the four-coloring problem, we can obtain the motion equation for color neuron c as

U_{xc}(t + 1) = restriction I + restriction II + present state maintenance
             = -A \Bigl( \sum_{k=1}^{4} V_{xk} - 1 \Bigr) - B \sum_{y=1}^{N} D_{xy} V_{yc} + V_{xc}(t) + dA_{xc}    (5)

where A and B are positive constants, D is the adjacency matrix, and V_{xk} is the output of the k-th neuron in region x.
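The steps above can be sketched as the following control-flow skeleton in Python. Here `update` and `energy` are placeholders for the problem-specific motion equation and energy function (given for the four-coloring problem in Section 4.2), so this is only a sketch of the loop structure, not the full method.

```python
import random

def solve(update, energy, n, t_limit=1000):
    """Control-flow sketch of the procedure in Sec. 3.3. `update(U, V, t)`
    applies the motion equation to every neuron; `energy(V)` evaluates E."""
    # (1)-(2): set t = 0 and randomize the initial values U_i(0)
    U = [random.uniform(-1.0, 1.0) for _ in range(n)]
    # (3): evaluate V_i(0) from U_i(0) with the McCulloch-Pitts function
    V = [1 if u > 0 else 0 for u in U]
    # (4): iterate until E = 0 or t_limit is reached
    for t in range(1, t_limit + 1):
        U = [0.0] * n               # (a) initialize U_i = 0
        U, V = update(U, V, t)      # (b) update U_i(t+1) and V_i(t+1)
        E = energy(V)               # (c) calculate the energy E
        if E == 0:                  # (d) optimum solution obtained
            return V, t
    return V, t_limit               # no optimum found within t_limit
```

The early return in step (d) is what gives the method its step count: the number of loop iterations executed before E first reaches 0 is the value reported as "Ave.step" in the simulations.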
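A minimal runnable sketch of this update rule is given below, combined with the threshold function (6) and energy function (7) presented later in this section. The 4-region ring map, the synchronous update order, and the omission of the dA_{xc} deadlock-breaking term are simplifying assumptions for illustration; the paper's experiments use much larger maps.

```python
A, B = 1.0, 1.0  # positive constants, set as in Sec. 4.3

# Hypothetical 4-region map arranged in a ring (an assumption for
# illustration only; the paper's experiments use 48-210 region maps).
D = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]
N, C = len(D), 4  # N regions, 4 color neurons per region

def motion(V, dA=None):
    """One synchronous application of the motion equation (5).
    The deadlock-breaking term dA_xc is taken as 0 here for simplicity."""
    if dA is None:
        dA = [[0.0] * C for _ in range(N)]
    U = [[0.0] * C for _ in range(N)]
    for x in range(N):
        for c in range(C):
            row = sum(V[x][k] for k in range(C))            # restriction I input
            adj = sum(D[x][y] * V[y][c] for y in range(N))  # restriction II input
            U[x][c] = -A * (row - 1) - B * adj + V[x][c] + dA[x][c]
    return U

def output(U):
    """Threshold function (6): V = 1 if U > 0, else 0."""
    return [[1 if u > 0 else 0 for u in row] for row in U]

def energy(V):
    """Energy function (7)."""
    e1 = sum((sum(V[x][c] for c in range(C)) - 1) ** 2 for x in range(N))
    e2 = sum(D[x][y] * V[x][c] * V[y][c]
             for x in range(N) for y in range(N) for c in range(C))
    return 0.5 * A * e1 + 0.5 * B * e2

# A proper coloring of the ring (regions 0,2 -> color 0; 1,3 -> color 1)
# has zero energy and is a fixed point of the update.
proper = [[1, 0, 0, 0], [0, 1, 0, 0], [1, 0, 0, 0], [0, 1, 0, 0]]
assert energy(proper) == 0.0
assert output(motion(proper)) == proper
```

Starting from every V_{xc} = 0, one application of (5) gives U_{xc} = +1 for all neurons, which matches the behavior of the first term described in the text.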
Depending upon U_{xc}, the output V_{xc} is determined by

V_{xc} = \begin{cases} 1 & (U_{xc} > 0) \\ 0 & (U_{xc} \le 0) \end{cases}    (6)

In equation (5), the first term represents the row constraint in the neural array, which forces one region to be colored by one and only one color. It corresponds to restriction I in this paper. In the case that no color neuron is firing, \sum_{k=1}^{4} V_{xk} = 0, so -(\sum_{k=1}^{4} V_{xk} - 1) = +1. This means that the value of U is changed in the positive direction; that is, V is drawn towards the firing direction. In the case that exactly one color neuron is firing, \sum_{k=1}^{4} V_{xk} = 1, so -(\sum_{k=1}^{4} V_{xk} - 1) = 0, and there is no change in U. Similarly, in the case that two or more color neurons are firing, the value of U is changed in the negative direction; that is, V is drawn towards the non-firing direction.

Figure 7: The change situation of energy (48 regions).
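The three cases above amount to a simple arithmetic check of the first term of equation (5); the tiny sketch below, with A = 1 as in the simulations, just evaluates that term for 0, 1, and 2 firing color neurons.

```python
def row_term(V_x, A=1.0):
    """First term of the motion equation (5): -A * (sum_k V_xk - 1)."""
    return -A * (sum(V_x) - 1)

assert row_term([0, 0, 0, 0]) == 1   # no color firing: U pushed positive
assert row_term([1, 0, 0, 0]) == 0   # exactly one color firing: no change
assert row_term([1, 1, 0, 0]) == -1  # two colors firing: U pushed negative
```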
Table 1: Simulation results of four-coloring problems.

              Takefuji's method [8]    Yamada's method [9]    Proposed method
  Regions N   Conv.(%)   Ave.step      Conv.(%)   Ave.step    Conv.(%)   Ave.step
  48          100        89            100        6           100        12.91
  110         -          -             -          -           100        54.17
  210         100        769           100        49          100        11.22
[...] or the state where restriction I is not satisfied. To avoid the state where restriction I is not satisfied, the fourth term is employed.

The fourth term dA_{xc}, which is a special term, forcibly fires a neuron of some region (the round region in the middle of Fig.6) when the surroundings of this region are colored by all four colors, as shown in Fig.6, so that it is impossible to place any color there. That is, when there is no color in the region in the convergent state, the neural network gives a large positive value to dA_{xc} and makes a neuron fire forcibly.

The energy function arranged for the four-coloring problem is given by the following formula:

E = \sum_{j=1}^{M} P_j(t) + \sum_{k=1}^{M'} Q_k(t)
  = \frac{1}{2} A \sum_{x=1}^{N} \Bigl( \sum_{c=1}^{4} V_{xc} - 1 \Bigr)^2 + \frac{1}{2} B \sum_{x=1}^{N} \sum_{y=1}^{N} \sum_{c=1}^{4} D_{xy} V_{xc} V_{yc}    (7)

The first term is the constraint that a region is colored by one and only one color. If the constraint is satisfied, the value of the first term becomes 0; otherwise, it becomes a positive value. The second term is the constraint that adjoining regions cannot be colored by the same color. If the constraint is satisfied, the value of the second term becomes 0; otherwise, it becomes a positive value. On the whole, the energy function takes only a positive value or 0, and its value increases if the restrictions are not satisfied. Therefore, the energy of the neural network moves in the decreasing direction as the restrictions become satisfied. When the values of the two restrictions become 0, the value of the energy becomes 0 as well; thus, the optimum solution is obtained.

4.3 Simulation Results

In this section, we apply the proposed method to the four-coloring problem. Simulations are performed on three kinds of maps: 48 regions (American map), 110 regions, and 210 regions. The parameters are set to A = 1, B = 1, and t_limit = 1000.

The change situation of the energy  This simulation aims to observe the change situations of the system energy, restriction I, and restriction II until an optimum solution is obtained. First, in the initial state we let V_{xc} = 0; the change situations of the energy, restriction I, and restriction II are illustrated in Fig.7, Fig.8, and Fig.9, respectively. The initial value of dA_{xc} is set to 0, and after 10 steps (at which point the neural network is judged to be in the convergence state) it is set to 100. As shown in Fig.7, the minimum value can be obtained in 1 step for the 48 regions. Fig.7, Fig.8, and Fig.9 illustrate that restriction I is drawn in the decreasing direction and restriction II is always satisfied until the convergence state of the neural network is obtained. This agrees with the behavior explained in Section 3 (see Fig.3).

The comparison with other methods  Next, in order to compare the proposed method with the other methods, the simulations are performed 100 times from different initial states. The compared methods are Takefuji's method [8] and Yamada's method [9]; the data for these two methods used in this paper are taken from paper [9]. The maps that can be compared are those of 48 regions and 210 regions, and we use the convergence rate and the average number of steps of each method for comparison. Here, the average number of steps is the average number of steps required for convergence, and the convergence rate expresses how often the optimum solution was reached during the 100 trials. The simulation results are shown in Table 1.

Table 1 shows that a 100% convergence rate is obtained by all the methods. For the 48-region map, the minimum value is calculated with the fewest steps by Yamada's method. However, as shown in Fig.7, the minimum value can also be calculated in only one step by the proposed method, depending on the initial state of the neurons. In the simulation of 210 regions, the minimum value is calculated with the fewest steps by our method. This shows that the proposed method performs better than the other methods.

5. Conclusions

In this paper, we proposed a new method to solve constraint satisfaction problems using a neural network with self-feedback. In this method, all the restriction conditions of a constraint satisfaction problem are
divided into two restrictions: restriction I and restriction II. In each processing step, restriction II is kept satisfied by setting its value to 0, while the value of restriction I is always moved in the decreasing direction. The optimum solution is obtained when the values of the energy, restriction I, and restriction II become 0 at the same time. As a typical example of constraint satisfaction problems, the proposed method was demonstrated by simulating the four-coloring problem. The simulation results showed that the proposed method reduces the average number of steps at a 100% convergence rate and performs better than the other methods.
References

[1] J. J. Hopfield and D. W. Tank, "Neural" computation of decisions in optimization problems, Biol. Cybern., No.52, pp.141-152, 1985.