Fig(a): Single-layer network (Adaline). Inputs X1 ... Xn, with weights W1 ... Wn, and a bias feed a single output unit Y.
Architecture
The architecture of the Adaline is shown in Fig(a).
The Adaline has only one output unit. This output
unit receives input from several input units and also
from a bias, whose activation is always +1.
In Fig(a), an input layer with X1 ... Xi ... Xn plus a bias,
and an output layer with only one neuron, are present.
The links between the input and output neurons carry
weighted interconnections. These weights change
as the training progresses.
The delta rule changes the weights of the
connections so as to minimize the difference between
the net input to the output unit, Y_in, and the target value t.
The delta rule is given by:
ΔWi = α (t − Y_in) Xi
where X is the vector of activations of the input units,
Y_in = b + Σi Xi Wi is the net input to the output unit,
t is the target value, and α is the learning rate.
The mean square error for a particular
training pattern is
E = Σj (tj − Y_inj)²
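The delta rule follows from gradient descent on the squared error (a short derivation, not spelled out on the slide):

```latex
% Squared error for one training pattern:
E = (t - Y_{in})^2, \qquad Y_{in} = b + \sum_i X_i W_i
% Gradient of the error with respect to a weight:
\frac{\partial E}{\partial W_i} = -2\,(t - Y_{in})\,X_i
% Step opposite the gradient; the constant factor 2 is absorbed into \alpha:
\Delta W_i = \alpha\,(t - Y_{in})\,X_i
```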
Algorithm
Initialize the weights (not zero; small random
values are used). Set the learning rate α.
Set the activations of the input units.
Compute the net input:
Y_in = b + Σi Xi Wi
From the delta learning rule:
ΔWi = α (t − Y_in) Xi
Update the bias and weights, for i = 1 to n:
Wi(new) = Wi(old) + α (t − Y_in) Xi
b(new) = b(old) + α (t − Y_in)
Test the stopping condition, e.g. that the mean square error
E = Σj (tj − Y_inj)²
shows no significant change.
Finally, apply the activation function to obtain the
output Y:
Y = f(Y_in) = 1, if Y_in ≥ 0
             −1, if Y_in < 0
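The training steps above can be sketched in Python (a minimal illustration; names such as `train_adaline` and `activation` are my own, not from the slides):

```python
# Minimal Adaline trained with the delta rule, following the algorithm above.
# The stopping test (error no longer changing) is one common choice.

def train_adaline(samples, alpha=0.2, epochs=100):
    """samples: list of (inputs, target) pairs with bipolar (+1/-1) values."""
    n = len(samples[0][0])
    w = [0.1] * n           # small nonzero initial weights
    b = 0.1                 # bias weight (the bias activation is always +1)
    prev_error = None
    for _ in range(epochs):
        error = 0.0
        for x, t in samples:
            y_in = b + sum(xi * wi for xi, wi in zip(x, w))  # net input
            # delta rule: Delta W_i = alpha * (t - Y_in) * X_i
            for i in range(n):
                w[i] += alpha * (t - y_in) * x[i]
            b += alpha * (t - y_in)
            error += (t - y_in) ** 2    # squared-error sum for this epoch
        if prev_error is not None and abs(prev_error - error) < 1e-9:
            break                       # stopping condition: error unchanged
        prev_error = error
    return w, b

def activation(y_in):
    # bipolar step function: +1 if Y_in >= 0, else -1
    return 1 if y_in >= 0 else -1
```

Trained on the bipolar AND function, for instance, the weights approach the least-squares solution W1 = W2 = 0.5, b = −0.5, which classifies all four patterns correctly.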
Example:
1. Develop an Adaline network for the ANDNOT function.
Fig(b): ANDNOT network. Inputs X1 and X2 connect to the output Y with initial weights W1 = 0.2, W2 = 0.2 and bias b = 0.2.
Epoch 1:
X1 | X2 | b | t | Y_in | (t − Y_in) | ΔW1 | ΔW2 | Δb | W1 | W2 | b | E
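The rows of Epoch 1 can be generated programmatically (a sketch; the slides do not state the learning rate, so α = 0.2 is an assumption, and values are rounded to three decimals):

```python
# One training epoch of the Adaline on ANDNOT (t = X1 AND (NOT X2)),
# reproducing the table columns X1, X2, t, Y_in, (t - Y_in),
# dW1, dW2, db, W1, W2, b, E.
# Initial weights from Fig(b): W1 = W2 = b = 0.2.
# The learning rate alpha = 0.2 is an assumption, not given on the slide.

alpha = 0.2
w1, w2, b = 0.2, 0.2, 0.2
# bipolar ANDNOT truth table: target is +1 only for X1 = 1, X2 = -1
patterns = [(1, 1, -1), (1, -1, 1), (-1, 1, -1), (-1, -1, -1)]

rows = []
for x1, x2, t in patterns:
    y_in = b + x1 * w1 + x2 * w2                    # net input
    err = t - y_in                                  # (t - Y_in)
    # delta-rule weight changes
    dw1, dw2, db = alpha * err * x1, alpha * err * x2, alpha * err
    w1, w2, b = w1 + dw1, w2 + dw2, b + db
    rows.append((x1, x2, t, round(y_in, 3), round(err, 3),
                 round(dw1, 3), round(dw2, 3), round(db, 3),
                 round(w1, 3), round(w2, 3), round(b, 3),
                 round(err ** 2, 3)))

for r in rows:
    print(r)
```

The first pattern, for example, gives Y_in = 0.6, (t − Y_in) = −1.6, and updated weights W1 = W2 = b = −0.12.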
Fig(c): Architecture of Madaline. Inputs X1 and X2 connect to hidden Adaline units Z1 and Z2 through weights W11, W12, W21, W22 (with biases b1, b2); Z1 and Z2 connect to the output unit Y through weights V1, V2 (with bias b3).
MRI Algorithm
The weights of the hidden Adaline units are adjustable;
the weights of the output unit are fixed.
V1 and V2 are fixed at 0.5, as is the bias b3.
The activation function for Z1, Z2 and Y is given
by:
f(p) = 1, if p ≥ 0
      −1, if p < 0
The other weights may be small random values.
Set the activations of the inputs.
Calculate the net input of the hidden Adaline units:
Z_in1 = b1 + X1 W11 + X2 W21
Z_in2 = b2 + X1 W12 + X2 W22
Find the output of the hidden Adaline units
(+1 if the net input is non-negative, −1 otherwise):
Z1 = f(Z_in1)
Z2 = f(Z_in2)
Calculate the net input to the output unit:
Y_in = b3 + Z1 V1 + Z2 V2
Apply the activation to get the output of the net:
Y = f(Y_in)
Find the error and do the weight updating:
If t = Y, no weight updating is needed.
If t ≠ Y, then:
If t = 1, update the weights on the Zj unit whose net
input is closest to zero:
Wij(new) = Wij(old) + α (1 − Z_inj) Xi
bj(new) = bj(old) + α (1 − Z_inj)
If t = −1, update the weights on all units Zk that have
positive net input:
Wik(new) = Wik(old) + α (−1 − Z_ink) Xi
bk(new) = bk(old) + α (−1 − Z_ink)
Test for the stopping condition.
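One MRI training step for the two-adaline Madaline of Fig(c) can be sketched as follows (names like `mri_step` are illustrative; the "closest to zero" selection is the standard MRI heuristic, and α = 0.5 is an assumed learning rate):

```python
# One MRI training step for the 2-input, 2-hidden-adaline Madaline of Fig(c).
# Hidden weights are trainable; output weights V1 = V2 = b3 = 0.5 are fixed.

def f(p):
    # bipolar step activation used for Z1, Z2 and Y
    return 1 if p >= 0 else -1

def mri_step(x, t, W, b, alpha=0.5):
    """x: (X1, X2) bipolar inputs; t: bipolar target.
    W[i][j]: weight from input X(i+1) to hidden unit Z(j+1);
    b: [b1, b2] hidden biases. W and b are updated in place."""
    # net inputs of the hidden adalines: Z_inj = bj + X1 W1j + X2 W2j
    z_in = [b[j] + x[0] * W[0][j] + x[1] * W[1][j] for j in range(2)]
    z = [f(zi) for zi in z_in]
    y_in = 0.5 + 0.5 * z[0] + 0.5 * z[1]   # fixed output layer (majority vote)
    y = f(y_in)
    if t == y:
        return y                            # no weight updating needed
    if t == 1:
        # update only the unit whose net input is closest to zero
        j = min(range(2), key=lambda k: abs(z_in[k]))
        for i in range(2):
            W[i][j] += alpha * (1 - z_in[j]) * x[i]
        b[j] += alpha * (1 - z_in[j])
    else:
        # t == -1: update all units with positive net input
        for k in range(2):
            if z_in[k] > 0:
                for i in range(2):
                    W[i][k] += alpha * (-1 - z_in[k]) * x[i]
                b[k] += alpha * (-1 - z_in[k])
    return y
```

The function returns the network's output before the update, so a training loop can cycle over the patterns and stop once every pattern returns its target.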
MRII Algorithm
This algorithm was proposed by Widrow,
Winter & Baxter in 1987.
In this method, all the weights in the net
are updated.
This algorithm differs from the MRI
algorithm only in the manner of weight
updating.
Initialize the weights (set all weights to some
small random values) and set the learning rate α.
Applications
Useful in noise cancellation.
An Adaline is used in virtually every modem.
Adaline has better convergence
properties than the perceptron.