
Feed Forward Network

Backpropagation Neural Network

S. Vivekanandan

Cabin: TT 319A
E-Mail: svivekanandan@vit.ac.in
Mobile: 8124274447
Back Propagation Network (BPN)
• Proposed by D. E. Rumelhart, G. E. Hinton and R. J. Williams in 1986.
• A type of multilayer feed-forward network.
• Uses the extended gradient-descent based delta learning rule (the back-propagation rule).
• The gradient-descent method minimizes the total squared error of the output computed by the net.
• Aim: to train the net to achieve a balance between the ability to respond correctly to the input patterns used for training and the ability to give reasonable responses to inputs that are similar.
Architecture

• X: input layer (one)
• Z: hidden layer (one)
• Y: output layer (one)
• w0k: output unit bias
• v0j: hidden unit bias
• Both the hidden and output units have a bias, whose output is always 1.
• Signals are sent in both directions: inputs are propagated forward and errors are propagated backward.
• The network may consist of any number of hidden layers (though computational complexity increases with each one).
Training algorithm
It involves 4 stages (a sketch of all four appears below):

1. Initialization of weights: small random values are assigned.

2. Feed forward: each input unit receives an input signal and transmits it to the hidden units, which calculate their activation functions to form the hidden response. The output units then calculate their activation functions to form the response of the net for the given input pattern.

3. Back propagation of errors: each output unit compares its computed activation with its target value to determine the associated error for that pattern; this error is then propagated back to the hidden layer.

4. Updating of the weights and biases: the final stage, where the weights and biases are updated using the errors and the activations.
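Below is a minimal runnable sketch of these four stages in Python, training a small network on XOR. The binary sigmoid activation, the 2-4-1 layer sizes, the learning rate of 0.5 and the epoch count are illustrative assumptions, not values from the slides; the variable names (v, v0, w, w0, z, y, alpha) follow the notation in the next section.

```python
import numpy as np

def f(x):
    """Binary sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

# Stage 1: initialize weights with small random values.
v  = rng.uniform(-0.5, 0.5, size=(2, 4))   # input -> hidden weights
v0 = rng.uniform(-0.5, 0.5, size=4)        # hidden biases
w  = rng.uniform(-0.5, 0.5, size=(4, 1))   # hidden -> output weights
w0 = rng.uniform(-0.5, 0.5, size=1)        # output bias
alpha = 0.5                                # learning rate (assumed)

for epoch in range(10000):
    for x, t in zip(X, T):
        # Stage 2: feed forward.
        z = f(x @ v + v0)          # hidden-layer response
        y = f(z @ w + w0)          # output-layer response

        # Stage 3: back-propagate the errors (sigmoid derivative y(1-y)).
        delta_k = (t - y) * y * (1 - y)            # output error term
        delta_j = (delta_k @ w.T) * z * (1 - z)    # hidden error term

        # Stage 4: update the weights and biases.
        w  += alpha * np.outer(z, delta_k)
        w0 += alpha * delta_k
        v  += alpha * np.outer(x, delta_j)
        v0 += alpha * delta_j

print(f(f(X @ v + v0) @ w + w0).round(2))   # approaches [[0], [1], [1], [0]]
```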
Parameters

• x: input training vector, x = (x1, …, xi, …, xn)
• t: output target vector, t = (t1, …, tk, …, tm)
• δk: error at output unit yk
• δj: error at hidden unit zj
• α: learning rate
• v0j: bias on hidden unit j
• zj: hidden unit j
• w0k: bias on output unit k
• yk: output unit k
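In this notation, the back-propagation error terms and update rules can be stated as follows. This is a reference sketch following the usual textbook formulation rather than a quotation from the slides; it assumes a differentiable activation f with net inputs y_in,k and z_in,j.

```latex
\begin{aligned}
\delta_k &= (t_k - y_k)\, f'(y_{\mathrm{in},k}), &
\delta_j &= f'(z_{\mathrm{in},j}) \sum_{k=1}^{m} \delta_k\, w_{jk},\\
\Delta w_{jk} &= \alpha\, \delta_k\, z_j, &
\Delta w_{0k} &= \alpha\, \delta_k,\\
\Delta v_{ij} &= \alpha\, \delta_j\, x_i, &
\Delta v_{0j} &= \alpha\, \delta_j.
\end{aligned}
```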
Selection of Parameters
Initial weights: between -0.5 and 0.5, or between -1 and 1.
• Too large: the net does not converge.
• Too small: extremely slow learning.
• Initialization can be done randomly, but there is also a specific approach called Nguyen-Widrow initialization (sketched below).
• It gives faster learning, based on a geometrical analysis.
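A sketch of Nguyen-Widrow initialization for the input-to-hidden weights, following the usual textbook formulation (the slide itself does not give the formula, and the layer sizes in the usage line are illustrative): each hidden unit's randomly initialized weight vector is rescaled to length β = 0.7·p^(1/n), where p is the number of hidden units and n the number of input units, and the biases are drawn from [-β, β].

```python
import numpy as np

def nguyen_widrow(n_inputs, n_hidden, rng=None):
    """Return input->hidden weights v and biases v0 per Nguyen-Widrow."""
    rng = rng or np.random.default_rng(0)
    beta = 0.7 * n_hidden ** (1.0 / n_inputs)        # scale factor
    v = rng.uniform(-0.5, 0.5, size=(n_inputs, n_hidden))
    v *= beta / np.linalg.norm(v, axis=0)            # rescale each unit's vector to length beta
    v0 = rng.uniform(-beta, beta, size=n_hidden)     # biases in [-beta, beta]
    return v, v0

v, v0 = nguyen_widrow(n_inputs=2, n_hidden=4)        # illustrative sizes
```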
Selection of Learning rate
• A high learning rate leads to rapid learning, but the weights may oscillate; a lower learning rate leads to slower learning.
• A common heuristic, sketched below: double the learning rate until the error value worsens.
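A self-contained toy sketch of this doubling heuristic. The least-squares problem, the epoch_error helper, the starting rate of 0.001 and the 20-step inner loop are all illustrative assumptions standing in for a full BPN training run.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 3))
t = X @ np.array([1.0, -2.0, 0.5])      # targets from a known linear map

def epoch_error(alpha, steps=20):
    """Mean squared error after a short gradient-descent run at rate alpha."""
    w = np.zeros(3)
    for _ in range(steps):
        grad = -2 * X.T @ (t - X @ w) / len(X)
        w -= alpha * grad
    return float(np.mean((t - X @ w) ** 2))

alpha, prev = 0.001, float("inf")
while True:
    err = epoch_error(alpha)
    if err > prev:      # error worsened: back off to the last good rate
        alpha /= 2
        break
    prev = err
    alpha *= 2          # try a more aggressive rate
print("chosen learning rate:", alpha)
```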



Learning in Back Propagation
There are 2 types of learning (contrasted in the sketch below):

 Sequential learning, or the per-pattern method: a given input is propagated forward, the error is determined and back-propagated, and the weights are updated immediately.

 Batch learning, or the per-epoch method: the weights are updated only after the entire set of training patterns has been presented to the network, i.e. a weight update is performed only once per epoch.
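A minimal runnable contrast of the two schemes, using a single linear unit trained with the delta rule as a deliberately simplified stand-in for a full BPN (the data, the learning rate alpha = 0.05 and the epoch count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))             # 8 training patterns, 3 inputs
t = X @ np.array([1.0, -2.0, 0.5])      # targets from a known linear map
alpha = 0.05

# Sequential (per-pattern): update immediately after every single pattern.
w_seq = np.zeros(3)
for epoch in range(200):
    for x, target in zip(X, t):
        err = target - w_seq @ x
        w_seq += alpha * err * x        # immediate weight update

# Batch (per-epoch): accumulate corrections, apply once per epoch.
w_bat = np.zeros(3)
for epoch in range(200):
    dw = np.zeros(3)
    for x, target in zip(X, t):
        err = target - w_bat @ x
        dw += alpha * err * x           # accumulate only, no update yet
    w_bat += dw / len(X)                # single smoothed update per epoch

print(w_seq, w_bat)                     # both approach [1, -2, 0.5]
```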
Merits

1. The mathematical formulation can be applied to any network; no network-specific derivation is required.
2. The computing time can be reduced by choosing small weights at the beginning.
3. A batch update of the weights exists, which provides a smoothing effect on the weight-correction terms.
Demerits

1. The number of learning steps is high, and the learning phase involves intensive calculations.
2. Selecting the number of hidden nodes in a network is difficult.
3. Long training times can result from a non-optimal step size.
4. The network may become trapped in local minima.
5. Training may cause temporal instability in the system.
Applications

BPN has a wide area of applications:

 Optical character recognition
 Image compression
 Data compression
 Load forecasting problems in the power-system area
 Control problems
 Non-linear simulation
 Fault detection problems
 Face recognition
 Avionics problems
