I. INTRODUCTION
$u_k = \sum_{j=1}^{d} w_{kj} x_j \qquad (1)$

where $x_j$ is the value of the $j$th input neuron, $w_{kj}$ is the weight between the $j$th input neuron and the $k$th hidden-layer neuron, $u_k$ is the output of the linear combiner, $\theta_k$ is the threshold, $\varphi(\cdot)$ is the activation function, and $y_k$ is the output signal of the neuron. In this model, the sigmoid function is chosen as the activation function for each neuron. The activation function is given by

$\varphi(a) = \dfrac{1}{1 + \exp(-a)}; \qquad a > 0 \ \text{and} \ a = \sum_{i=1}^{d} w_{hi} x_i + b \qquad (2)$
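As a concrete illustration of Eqs. (1) and (2), one neuron's forward computation might be sketched as follows; the function name and the use of a bias term `b` in place of the threshold are our assumptions, not the paper's notation:

```python
import math

def neuron_output(x, w, b=0.0):
    """Linear combiner followed by a sigmoid activation.

    x : list of input values x_j
    w : list of weights w_kj for one neuron
    b : bias term standing in for the threshold
    """
    a = sum(wj * xj for wj, xj in zip(w, x)) + b  # u_k = sum_j w_kj x_j + b
    return 1.0 / (1.0 + math.exp(-a))             # sigmoid activation

print(neuron_output([1.0, -1.0], [0.5, 0.5], b=0.0))  # prints 0.5
```

With a zero weighted sum the sigmoid returns exactly 0.5, its midpoint.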
[Figure: network architecture with inputs $x_i(t-n), x_i(t-n+1), \dots, x_i(t)$ and output $y(t)$]
$u_i = g\left(\| x - c_i \|\right), \qquad i = 1, \dots, n$
$\Delta w = -\eta \nabla_w E = -\eta \sum_{t=1}^{m} (y^t - y)\, \nabla_w y^t \qquad (9)$

$\Delta w_t = \eta\,(y^{t+1} - y^t) \sum_{k=1}^{t} \nabla_w y^k, \quad t = 1,2,\dots,m, \quad y^{m+1} = y \qquad (10)$
and
$y = Cu$.
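The hidden-layer map $u_i = g(\|x - c_i\|)$ together with the linear output $y = Cu$ might be sketched as below; the Gaussian choice of $g$ and the width `sigma` are assumptions, since the source does not fix them here:

```python
import math

def rbf_forward(x, centers, C, sigma=1.0):
    """RBF network forward pass: u_i = g(||x - c_i||), then y = C u.

    Assumed basis (not stated in the paper): Gaussian
    g(r) = exp(-r^2 / (2 sigma^2)).
    """
    u = []
    for c in centers:
        r2 = sum((xj - cj) ** 2 for xj, cj in zip(x, c))  # ||x - c_i||^2
        u.append(math.exp(-r2 / (2.0 * sigma ** 2)))      # hidden output u_i
    # output layer: y_k = sum_i C[k][i] * u_i
    return [sum(Ck[i] * u[i] for i in range(len(u))) for Ck in C]
```

An input placed exactly on a center activates that hidden unit with $u_i = 1$, so the corresponding output weight is passed through unchanged.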
[Figure: network structure with inputs $x_i(t-n), x_i(t-n+1), \dots, x_i(t)$, weights $w_0, w_1, w_2, \dots, w_n$, and output $y(t)$]
$\Delta w_t = \eta\,(y^{t+1} - y^t) \sum_{k=1}^{t} \lambda^{t-k}\, \nabla_w y^k \qquad (11)$

$\Delta w_t = \eta\,(y^{t+1} - y^t)\, \nabla_w y^t \qquad (12)$
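Reading Eq. (11) as a weight update driven by the output differential $y^{t+1} - y^t$ with the time-importance factor $\lambda^{t-k}$ applied to stored gradients, one step for a single scalar weight might be sketched as follows (the variable names and list layout are our assumptions):

```python
def tdbp_update(w, eta, lam, y_seq, grad_seq, t):
    """One time-decayed update step in the spirit of Eq. (11).

    Assumptions (not from the paper): w and the gradients are plain
    floats for a single weight; y_seq holds outputs y^1..y^{t+1} and
    grad_seq holds the gradients of y^k w.r.t. w for k = 1..t
    (both as 0-based Python lists).
    """
    # (y^{t+1} - y^t): differential of past information
    diff = y_seq[t] - y_seq[t - 1]
    # sum_{k=1}^{t} lam^(t-k) * grad_w y^k : time-importance weighting
    decayed = sum(lam ** (t - k) * grad_seq[k - 1] for k in range(1, t + 1))
    return w + eta * diff * decayed
```

With $\lambda = 1$ this reduces to the plain sum of past gradients, and keeping only the $k = t$ term recovers the simpler update of Eq. (12).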
$\mathrm{MAPE} = \frac{1}{N} \sum_{i=1}^{N} \frac{|y_i - \hat{y}_i|}{y_i} \times 100\% \qquad (13)$

$\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (y_i - \hat{y}_i)^2} \qquad (14)$

$\mathrm{VAPE} = \mathrm{Var}\!\left(\frac{|y_i - \hat{y}_i|}{y_i}\right) \times 100\% \qquad (15)$
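The three criteria can be computed directly; the VAPE form below (variance of the absolute percentage errors) is our assumption, since Eq. (15) is not fully legible in the source:

```python
import math

def mape(y_true, y_pred):
    """Mean absolute percentage error, Eq. (13), in percent."""
    n = len(y_true)
    return 100.0 * sum(abs(yt - yp) / yt for yt, yp in zip(y_true, y_pred)) / n

def rmse(y_true, y_pred):
    """Root mean squared error, Eq. (14)."""
    n = len(y_true)
    return math.sqrt(sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred)) / n)

def vape(y_true, y_pred):
    """Variance of the absolute percentage errors (assumed form of Eq. (15))."""
    ape = [100.0 * abs(yt - yp) / yt for yt, yp in zip(y_true, y_pred)]
    mean = sum(ape) / len(ape)
    return sum((a - mean) ** 2 for a in ape) / len(ape)
```

For example, forecasts that are each off by exactly 10% give MAPE = 10 and VAPE = 0, since the percentage errors have no spread.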
Fig. 3: A part extracted from the traffic flow forecasting of station 708 with the BPL algorithm
TABLE II
LEARNING TIME COMPARISON OF THE NETWORKS FOR PROPOSED METHODS WITH BPL AND TDBPL ALGORITHMS

               MLP     RBF
BPL (sec.)     312.7   261.3
TDBPL (sec.)   165.1   145.3
VII. CONCLUSIONS
In this paper we presented a new algorithm, called TDBP learning, and ANNs with learning coefficients for forecasting traffic flow in a real case study. The results show that, under the same conditions, the TDBP learning algorithm is approximately 2.3 times faster than the BPL algorithm. Its performance criteria are also approximately 1.6 times better than those of the BP learning algorithm. In the BP learning algorithm, the addition of any new pattern influences the weights of all links, and the same weights may be dragged in different directions by different learning patterns. As indicated in Table II, training the network therefore requires extensive computational time. In the TDBP learning algorithm the new pattern also affects the weights of all links; however, as illustrated in our algorithm, each forecasting step has its own past content, so the weights do not move far from their actual values when they change. Consequently, the computational time for training the network decreases. A comparison of the results shown in Fig. 3 and Fig. 4 shows that the TDBP learning algorithm is more efficient than the BP learning algorithm. Using the differentials of past information, together with the time importance of this information, for learning provides an added virtual memory to the system, which raises the performance of our networks.
Fig. 4: A part extracted from the traffic flow forecasting of station 708 with the TDBPL algorithm
[Table: performance criteria of the networks with the BPL algorithm]

        MAPE (BPL)   RMSE (BPL)   VAPE (BPL)
MLP     6.73         6.31         6.59
RBF     5.34         5.28         4.60
[Table: performance criteria of the networks with the TDBPL algorithm]

        MAPE (TDBPL)   RMSE (TDBPL)   VAPE (TDBPL)
MLP     4.23           5.24           3.71
RBF     3.13           4.19           2.60