Developed in 1960 by Widrow & Hoff.

It is very closely related to the perceptron
learning rule. The rule is called the Delta rule.
It adjusts the weights to reduce the difference
b/w the net i/p to the o/p unit and the desired
o/p, which results in a least mean square error.
Adaline (Adaptive Linear Neuron) & Madaline
(Multilayered Adaline) networks use this LMS
learning rule & are applied to various neural n/w
applications.
The weights on the interconnections in the
adaline & madaline networks are adjustable.
Adaline
It uses bipolar activations for its
input signals and target output (+1 or -1).
The learning rule can be called the Delta
rule, Least Mean Square (LMS) rule, or Widrow-Hoff
rule.
[Figure: a single output unit Y receiving inputs X1...Xn through weights W1...Wn, plus a bias input 1 with weight b]
Fig(a):- single layer n/w (Adaline)
Architecture
The architecture of adaline is shown in fig(a).
The adaline has only one output unit. This o/p
unit receives i/p from several units & also from
a bias, whose activation is always +1.
In fig(a), an i/p layer with X1...Xi...Xn & bias, and an
o/p layer with only one neuron are present.
The links b/w the input & output neurons possess
weighted interconnections. These weights get
changed as the training progresses.

The delta rule changes the weight of the
connection to minimize the difference b/w the
net i/p to the o/p unit, Y-in, & the target value t.
The delta rule is given by:
ΔWi = α(t - Y-in)Xi
where X is the vector of activations of the i/p units,
Y-in is the net i/p to the o/p unit (Y-in = b + Σi Xi Wi),
t is the target value, and α is the learning rate.
The mean square error for a particular
training pattern is
E = Σj (tj - Y-inj)²
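The delta rule above can be sketched as a single update function. The function name and argument layout are illustrative assumptions; the sample call uses the first ANDNOT pattern with all weights at 0.2 and α = 0.2, mirroring the worked example later in the slides:

```python
# One delta-rule step for a single Adaline unit (a sketch; the
# function name and argument layout are illustrative, not a standard API).

def delta_update(w, b, x, t, alpha):
    """Return updated weights, bias and squared error for one pattern."""
    y_in = b + sum(wi * xi for wi, xi in zip(w, x))     # net input Y-in
    err = t - y_in                                      # (t - Y-in)
    w = [wi + alpha * err * xi for wi, xi in zip(w, x)] # Wi += alpha*err*Xi
    b = b + alpha * err
    return w, b, err ** 2

# First ANDNOT pattern of the worked example: X = (1, 1), t = -1
w, b, e = delta_update([0.2, 0.2], 0.2, [1, 1], -1, 0.2)
```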
Algorithm
Initialize weights (not zero but small random
values are used). Set the learning rate α.
Set activations of the input units.
Calculate the net input: Y-in = b + Σi Xi Wi
From the delta learning rule:
ΔWi = α(t - Y-in)Xi
Update bias & weights, i = 1 to n:
Wi(new) = Wi(old) + α(t - Y-in)Xi
b(new) = b(old) + α(t - Y-in)
Test the stopping condition, e.g. when the error
E = Σj (tj - Y-inj)² no longer decreases.
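The steps above can be collected into a minimal training loop. This is a sketch: stopping after a fixed number of epochs is an assumption, since the slides leave the exact stopping test open. The ANDNOT data and initial values (0.2 everywhere) mirror the example that follows:

```python
# Minimal Adaline training loop following the algorithm above.
# Stopping after a fixed number of epochs is an assumption; the slides
# leave the exact stopping test open (e.g. error no longer decreasing).

def train_adaline(samples, w, b, alpha, epochs=10):
    for _ in range(epochs):
        total_err = 0.0
        for x, t in samples:
            y_in = b + sum(wi * xi for wi, xi in zip(w, x))  # net input
            err = t - y_in
            w = [wi + alpha * err * xi for wi, xi in zip(w, x)]
            b += alpha * err
            total_err += err ** 2                            # LMS error
    return w, b, total_err

# Bipolar ANDNOT data with the example's initial values (0.2 everywhere)
andnot = [([1, 1], -1), ([1, -1], 1), ([-1, 1], -1), ([-1, -1], -1)]
w, b, e = train_adaline(andnot, [0.2, 0.2], 0.2, 0.2, epochs=6)
```

After training, applying the ±1 step activation to Y-in classifies all four ANDNOT patterns correctly.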
Finally apply the activation to obtain the o/p Y:
Y = f(Y-in) = 1, if Y-in ≥ 0
             -1, if Y-in < 0
Example:-
Develop an adaline n/w for ANDNOT, with initial
weights W1 = 0.2, W2 = 0.2, bias b = 0.2, and
learning rate α = 0.2.

[Figure: inputs X1, X2 with weights W1 = 0.2, W2 = 0.2, bias input 1 with b = 0.2, output unit Y]
Fig(b):- ANDNOT
Epoch 1 (α = 0.2):

X1   X2   b    t    Y-in   (t-Y-in)   ΔW1     ΔW2     Δb      W1      W2      b       E
 1    1   1   -1    0.60    -1.60    -0.32   -0.32   -0.32   -0.12   -0.12   -0.12   2.56
 1   -1   1    1   -0.12     1.12     0.22   -0.22    0.22    0.10   -0.34    0.10   1.25
-1    1   1   -1   -0.34    -0.66     0.13   -0.13   -0.13    0.23   -0.47   -0.03   0.44
-1   -1   1   -1    0.21    -1.21     0.24    0.24   -0.24    0.47   -0.23   -0.27   1.46

After Epoch 6 we get W1 = 0.5, W2 = -0.5, b = -0.5.
Using these weights, the LMS error is calculated: E = Σj (tj - Y-inj)².
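The Epoch-1 rows can be reproduced programmatically. This is a sketch; note that the slides round each entry to two decimals and carry the rounded values forward, so the later rows of an exact computation can differ from the table by about ±0.01:

```python
# Reproducing Epoch 1 of the ANDNOT example (alpha = 0.2, initial
# W1 = W2 = b = 0.2). Rounding here is for display only, so the last
# rows can differ from the slides' hand-rounded table by about 0.01.

alpha, w1, w2, b = 0.2, 0.2, 0.2, 0.2
patterns = [(1, 1, -1), (1, -1, 1), (-1, 1, -1), (-1, -1, -1)]
rows = []
for x1, x2, t in patterns:
    y_in = b + x1 * w1 + x2 * w2      # net input Y-in
    err = t - y_in                    # (t - Y-in)
    w1 += alpha * err * x1            # delta-rule updates
    w2 += alpha * err * x2
    b += alpha * err
    rows.append((round(y_in, 2), round(err, 2),
                 round(w1, 2), round(w2, 2), round(b, 2),
                 round(err ** 2, 2)))
```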
Madaline
Madaline is a combination of adalines.
It is also called multilayered adaline.
Madaline has two training algorithms, MRI &
MRII.
Architecture:-
[Figure: inputs X1, X2 connected to hidden adalines Z1, Z2 through weights W11, W12, W21, W22, with bias inputs b1, b2; Z1, Z2 connected to output unit Y through weights V1, V2, with bias b3]
Fig(c):- Architecture of Madaline
MRI Algorithm
Weights of the hidden adaline units are adjustable;
weights of the o/p unit are fixed.
V1 & V2 are fixed (0.5 each), with bias b3 as 0.5.
The activation function for Z1, Z2 & Y is given
by:
f(p) = 1, if p ≥ 0
      -1, if p < 0
Other weights may be small random values.
Set activations of the i/p units.
Calculate the net i/p of the hidden adaline units.
Z-in1 = b1 + X1 W11 + X2 W21
Z-in2 = b2 + X1 W12 + X2 W22
Find the o/p of the hidden adaline units (+1 if the
net i/p is +ve, -1 if -ve).
Z1 = f(Z-in1)
Z2 = f(Z-in2)
Calculate the net input to the output unit.
Y-in = b3 + Z1 V1 + Z2 V2
Apply the activation to get the output of the net.
Y = f(Y-in)
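The feedforward pass above can be sketched as follows. The hand-picked hidden weights in the usage call are an illustrative assumption, not taken from the slides; they make the net compute XOR, a classic Madaline demonstration:

```python
# Madaline (MRI) forward pass with the fixed output unit
# V1 = V2 = b3 = 0.5, so the output adaline computes the OR of Z1, Z2.

def mri_forward(x1, x2, w11, w21, w12, w22, b1, b2,
                v1=0.5, v2=0.5, b3=0.5):
    f = lambda p: 1 if p >= 0 else -1      # bipolar step activation
    z_in1 = b1 + x1 * w11 + x2 * w21       # hidden net inputs
    z_in2 = b2 + x1 * w12 + x2 * w22
    z1, z2 = f(z_in1), f(z_in2)
    y = f(b3 + z1 * v1 + z2 * v2)          # output unit (OR of Z1, Z2)
    return y, z_in1, z_in2

# Hand-picked weights (an assumption for illustration) that make the
# net compute XOR: Z1 detects (1, -1), Z2 detects (-1, 1).
y, _, _ = mri_forward(1, -1, 1, -1, -1, 1, -1, -1)
```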

Find the error and do the weight updation.
If t = Y, no weight updation.
If t ≠ Y, then:
If t = 1, update the weights on the Zj unit whose
net i/p is closest to 0:
Wij(new) = Wij(old) + α(1 - Z-inj)Xi
bj(new) = bj(old) + α(1 - Z-inj)
If t = -1, update the weights on every Zk unit
whose net i/p is +ve:
Wik(new) = Wik(old) + α(-1 - Z-ink)Xi
bk(new) = bk(old) + α(-1 - Z-ink)
Test for the stopping condition.
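The update step can be sketched as below. The unit-selection rule (the unit with net input closest to zero when t = 1; every unit with positive net input when t = -1) is the standard MRI choice and is assumed here, as are the helper's name and argument layout:

```python
# Sketch of the MRI weight update. The unit-selection rule (closest to
# zero for t = 1, all positive-net-input units for t = -1) is the
# standard MRI choice; alpha and the argument layout are assumptions.

def mri_update(x, t, z_in, w, b, alpha):
    """w[i][j]: weight from input i to hidden unit j; b[j]: its bias."""
    if t == 1:
        # nudge the unit whose net input is closest to zero toward +1
        j = min(range(len(z_in)), key=lambda k: abs(z_in[k]))
        targets = {j: 1}
    else:
        # nudge every unit with positive net input toward -1
        targets = {k: -1 for k in range(len(z_in)) if z_in[k] >= 0}
    for j, tj in targets.items():
        err = tj - z_in[j]                 # (+/-1 - Z-inj)
        for i in range(len(x)):
            w[i][j] += alpha * err * x[i]
        b[j] += alpha * err
    return w, b
```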

MRII Algorithm
This algorithm was proposed by Widrow,
Winter & Baxter in 1987.
In this method, all the weights in the net
are updated.
This algorithm differs from the MRI
algorithm only in the manner of weight
updation.
Initialize weights (all weights to small
random values) and set the learning rate α.
Applications
Useful in noise correction.
Adaline is used in every modem.
Adaline has better convergence
properties than the Perceptron.
