
Illustration of Back-propagation Learning of an ANN

1. Algorithm

Step 1: Normalize the inputs and outputs with respect to their maximum values so that they lie in the range 0 to 1. Assume that there are l inputs and n outputs.

Step 2: Choose m, the number of hidden neurons in the network. The value of m is usually between l and 2l.

Step 3: [V], an l x m matrix, holds the synaptic weights between the input layer and the hidden layer; [W], an m x n matrix, holds the weights between the hidden layer and the output layer. Initialize both with small random numbers, usually between -1 and +1.

Step 4: Present the first set of inputs [II] and the target output [T]. For the first (input) layer take the identity activation function, so that [OI] = [II], where [OI] is the output of the input layer (size l x 1) and [II] is its input.

Step 5: Compute the inputs to the hidden layer by multiplying the input-layer outputs with the corresponding synaptic weights: [IH] = [V]T [OI].

Step 6: The outputs of the hidden units are calculated with the sigmoidal activation function: [OH] = 1 / (1 + exp(-IH)).

Step 7: Compute the inputs to the output layer: [IO] = [W]T [OH].

Step 8: The outputs of the output layer are again calculated with the sigmoidal function: [OO] = 1 / (1 + exp(-IO)).

Step 9: Compute the error for the current iteration (say the t-th iteration), Et, which can be the root mean square error over the training set.

Step 10: Find the error gradient [δ] at the output units, an n x 1 vector with components δk = Ok (1 - Ok)(Tk - Ok), where Ok is the k-th output of the output layer and Tk the corresponding target.

Step 11: Find [Y] as [Y] = [OH] [δ]T, the m x n matrix formed from the hidden-layer outputs and the output error gradients.

Step 12: Find [ΔW]t+1 = [ΔW]t + η [Y], where η is the learning rate.

Step 13: Find the error back-propagated to the hidden layer, [e] = [W] [δ] (an m x 1 vector).

Step 14: Find [X] as [X] = [OI] [δh]T, where [δh] is the hidden-layer error gradient with components δhi = ei (OHi)(1 - OHi).

Step 15: Find [ΔV]t+1 = [ΔV]t + η [X].

Step 16: Find [W]t+1 = [W]t + [ΔW]t+1 and [V]t+1 = [V]t + [ΔV]t+1.

Step 17: Repeat the above steps until the error is less than the tolerance. Once the process converges, the weight matrices are saved; they are the result of the algorithm. For new input data, the output is obtained by using these saved weight matrices. Hence, in the error back-propagation mechanism there are two kinds of propagation of information: the input signal is propagated forward up to the final output, and the error computed at the output is propagated back towards the input side. A minimal code sketch of these steps is given below.
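The following is a minimal NumPy sketch of Steps 3 to 17 for a single hidden layer, offered only as a sanity check and not as the author's implementation. The function and variable names (train_backprop, eta, tol, and so on) are illustrative, and the increments of Steps 12 and 15 are applied directly to the weights without any momentum term, as in the first iteration of the worked example below.

import numpy as np

def sigmoid(x):
    # Sigmoidal activation used in Steps 6 and 8
    return 1.0 / (1.0 + np.exp(-x))

def train_backprop(inputs, targets, m, eta=0.6, tol=1e-3, max_epochs=10000, seed=0):
    """Train an l-m-n network with the steps above; returns the weight matrices V and W."""
    rng = np.random.default_rng(seed)
    l = inputs.shape[1]                       # number of inputs
    n = targets.shape[1]                      # number of outputs
    # Step 3: small random weights between -1 and +1
    V = rng.uniform(-1.0, 1.0, size=(l, m))   # input layer -> hidden layer
    W = rng.uniform(-1.0, 1.0, size=(m, n))   # hidden layer -> output layer
    for epoch in range(max_epochs):
        sq_err = 0.0
        for x, t in zip(inputs, targets):
            O_I = x.reshape(l, 1)             # Step 4: identity activation at the input layer
            T = t.reshape(n, 1)
            I_H = V.T @ O_I                   # Step 5
            O_H = sigmoid(I_H)                # Step 6
            I_O = W.T @ O_H                   # Step 7
            O_O = sigmoid(I_O)                # Step 8
            sq_err += float(np.sum((T - O_O) ** 2))   # Step 9 (accumulated squared error)
            d = (T - O_O) * O_O * (1.0 - O_O)         # Step 10: output error gradient
            Y = O_H @ d.T                             # Step 11
            e = W @ d                                 # Step 13: error at the hidden layer
            d_h = e * O_H * (1.0 - O_H)               # Step 14: hidden error gradient
            X = O_I @ d_h.T                           # Step 14
            W = W + eta * Y                           # Steps 12 and 16 (no momentum term)
            V = V + eta * X                           # Steps 15 and 16
        rms = np.sqrt(sq_err / len(inputs))           # root mean square error
        if rms < tol:                                 # Step 17: stop at the tolerance
            break
    return V, W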

2. Example Problem

Consider that for a particular problem there are five training sets, as given in the table below.
SN    Inputs              Output
      I1        I2        O
1     0.4      -0.7       0.1
2     0.3      -0.5       0.05
3     0.6       0.1       0.3
4     0.2       0.4       0.25
5     0.1      -0.2       0.12
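As a small illustration, the table can be written out as NumPy arrays in the shape expected by the train_backprop sketch above. The scaling shown for Step 1 is only one possible reading of "normalize with respect to the maximum values", and is an assumption on our part; the worked example below uses the raw values directly.

import numpy as np

# Training set from the table above, one row per pattern.
inputs = np.array([
    [0.4, -0.7],
    [0.3, -0.5],
    [0.6,  0.1],
    [0.2,  0.4],
    [0.1, -0.2],
])
targets = np.array([[0.1], [0.05], [0.3], [0.25], [0.12]])

# One possible reading of Step 1 (an assumption; the worked example below
# uses the raw values): divide each column by its largest magnitude.
inputs_scaled = inputs / np.abs(inputs).max(axis=0)
targets_scaled = targets / np.abs(targets).max(axis=0)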

[Figure: a 2-2-1 network for the first training pair, with the inputs 0.4 and -0.7 at the Input layer, the initial weights of [V]0 and [W]0 marked on the connections to the Hidden layer and the Output layer, and the target output T = 0.1.]

Initially, for the first training pair (I1 = 0.4, I2 = -0.7, target T = 0.1),

[OI] = {0.4; -0.7}

Initialize the weights as

[V]0 = [0.1  0.4; -0.2  0.2]    [W]0 = [0.2; -0.5]

[IH] = [V]T [OI] = {0.18; 0.02}
[OH] = 1 / (1 + exp(-IH)) = {0.5448; 0.5050}
[IO] = [W]T [OH] = -0.14354
[OO] = 1 / (1 + exp(-IO)) = 0.4642

Squared error E1 = (T - OO)^2 = (0.1 - 0.4642)^2 = 0.13264

Weight adjustments:
δO = (T - OO) OO (1 - OO) = -0.09058
[Y] = [OH] δO = {-0.0493; -0.0457}
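These forward-pass figures can be reproduced with a few lines of NumPy; the sketch below simply re-evaluates Steps 5 to 11 for the first training pair, and the variable names are ours.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

O_I = np.array([[0.4], [-0.7]])            # [OI] for the first training pair
T = np.array([[0.1]])                      # target output
V0 = np.array([[0.1, 0.4], [-0.2, 0.2]])   # [V]0
W0 = np.array([[0.2], [-0.5]])             # [W]0

I_H = V0.T @ O_I                           # {0.18; 0.02}
O_H = sigmoid(I_H)                         # {0.5448; 0.5050}
I_O = W0.T @ O_H                           # -0.14354
O_O = sigmoid(I_O)                         # 0.4642
E1 = ((T - O_O) ** 2).item()               # 0.13264
delta_O = (T - O_O) * O_O * (1 - O_O)      # -0.09058
Y = O_H @ delta_O.T                        # {-0.0493; -0.0457}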

Here the learning rate η = 0.6 is taken, with [ΔW]0 = [ΔV]0 = [0].

[ΔW]1 = [ΔW]0 + η [Y] = {-0.02958; -0.02742}

[e] = [W]0 δO = {-0.018116; 0.04529}

[δh] with δhi = ei (OHi)(1 - OHi) = {-0.00449; 0.01132}

[X] = [OI] [δh]T = [-0.001796  0.004528; 0.003143  -0.007924]

[ΔV]1 = [ΔV]0 + η [X] = [-0.001077  0.002716; 0.001885  -0.004754]

Then

[W]1 = [W]0 + [ΔW]1 = [0.17042; -0.52742]
[V]1 = [V]0 + [ΔV]1 = [0.098923  0.402716; -0.198115  0.195246]

With these updated weights, take one more forward pass and find the error; if the error is not yet acceptable, continue updating in the same way.
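The weight-update figures can be checked in the same way. The sketch below carries over [OH] and δO from the forward pass as hard-coded values and applies Steps 12 to 16 with η = 0.6; the names dW1, dV1 and delta_h are ours.

import numpy as np

eta = 0.6                                   # learning rate, as taken above
O_I = np.array([[0.4], [-0.7]])             # [OI]
O_H = np.array([[0.5448], [0.5050]])        # [OH] from the forward pass
delta_O = np.array([[-0.09058]])            # output error gradient from the forward pass
V0 = np.array([[0.1, 0.4], [-0.2, 0.2]])    # [V]0
W0 = np.array([[0.2], [-0.5]])              # [W]0

Y = O_H @ delta_O.T                         # {-0.0493; -0.0457}
dW1 = eta * Y                               # [ΔW]1 = {-0.02958; -0.02742}
e = W0 @ delta_O                            # {-0.018116; 0.04529}
delta_h = e * O_H * (1 - O_H)               # {-0.00449; 0.01132}
X = O_I @ delta_h.T                         # [-0.001796 0.004528; 0.003143 -0.007924]
dV1 = eta * X                               # [ΔV]1
W1 = W0 + dW1                               # [0.17042; -0.52742]
V1 = V0 + dV1                               # [[0.0989, 0.4027], [-0.1981, 0.1952]]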
