
Computational Intelligence Topic: Hebbian Learning + Coding

Assignment #1
Presented to
Assistant Professor Dr. Tehseen Jeelani
Karachi University
Institute of Business Management

Presented by Adnan Alam Khan (Std_18090@iobm.edu.pk)



Background
The main property of a neural network is its ability to learn from its environment and to improve its performance through learning. So far we have considered supervised or active learning, that is, learning with an external teacher or supervisor who presents a training set to the network. But another type of learning also exists: unsupervised learning.
In contrast to supervised learning, unsupervised or self-organised learning does not require an external teacher. During the training session the neural network receives a number of different input patterns, discovers significant features in these patterns, and learns how to classify input data into appropriate categories. Unsupervised learning tends to follow the neuro-biological organisation of the brain. Unsupervised learning algorithms aim to learn rapidly and can be used in real time.
Hebb's Law can be represented in the form of two rules:
1. If two neurons on either side of a connection are activated synchronously, then the weight of that connection is increased.
2. If two neurons on either side of a connection are activated asynchronously, then the weight of that connection is decreased.
Hebb's Law provides the basis for learning without a teacher. Learning here is a local phenomenon occurring without feedback from the environment.
Introduction
In neural networks, learning is achieved mostly (but not exclusively) through changes in the
strengths of the connections between neurons. Mechanisms of learning include:
1. Changes in neural parameters (threshold, time constants)
2. Creation of new synapses
3. Elimination of synapses
4. Changes in the synaptic weights or connection strengths
One common way to calculate changes in connection strengths in a neural network is the so-called Hebbian learning rule, in which a change in the strength of a connection is a function of the pre- and postsynaptic neural activities. It is called the Hebbian learning rule after D. Hebb: when neuron A repeatedly participates in firing neuron B, the strength of the action of A onto B increases.
The neuropsychologist Donald Hebb postulated in 1949 how biological neurons learn:
"When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."
In more familiar terminology, that can be stated as the Hebbian learning rule:
1. If two neurons on either side of a synapse (connection) are activated simultaneously (i.e. synchronously), then the strength of that synapse is selectively increased. In the notation used for perceptrons, that can be written as the weight update
$$\Delta w_{ij} = \eta \cdot \text{out}_j \cdot \text{in}_i$$
If $x_j$ is the output of the presynaptic neuron, $x_i$ the output of the postsynaptic neuron, $w_{ij}$ the strength of the connection between them, and $\eta$ the learning rate, then one form of the learning rule is
$$\Delta w_{ij}(t) = \eta\, x_j(t)\, x_i(t)$$
A more general form of a Hebbian learning rule is
$$\Delta w_{ij}(t) = F(x_j, x_i, \eta, t, \theta)$$
in which time $t$ and a learning threshold $\theta$ can be taken into account.
There is strong physiological evidence that this type of learning does take place in the region of
the brain known as the hippocampus.
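As an illustration, the following is a minimal MATLAB sketch of the plain Hebbian update $\Delta w = \eta\, y\, x$ for a single linear postsynaptic neuron; the learning rate, input pattern, and initial weights are assumed toy values, not taken from this assignment.

```matlab
% Minimal sketch of the plain Hebbian update for one linear neuron:
% y = w' * x and dw = eta * y * x (all values below are assumed).
eta = 0.1;                 % learning rate
x   = [1; 0; 1];           % presynaptic activities (one input pattern)
w   = [0.2; 0.1; 0.3];     % initial connection strengths
for n = 1:5                % repeated presentation of the same pattern
    y = w' * x;            % postsynaptic activity
    w = w + eta * y * x;   % Hebbian update: co-active connections grow
end
disp(w)                    % weights on the co-active lines have increased
```

Note that repeated presentations keep increasing these weights without bound, which is exactly the instability discussed next.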



An obvious problem with the above rule is that it is unstable: chance coincidences will build up the connection strengths, and all the weights will tend to increase indefinitely. Consequently, the basic learning rule (rule 1) is often supplemented by:
2. If two neurons on either side of a synapse are activated asynchronously, then that synapse is selectively weakened or eliminated.
Another way to stop the weights increasing indefinitely is to normalise them so that they are constrained to lie between 0 and 1. This is achieved by the weight update
$$w_{ij} \leftarrow \frac{w_{ij} + \eta\,\text{out}_j\,\text{in}_i}{\left(\sum_{k}\left(w_{kj} + \eta\,\text{out}_j\,\text{in}_k\right)^2\right)^{1/2}}$$
which, using a small-$\eta$ and linear-neuron approximation, leads to Oja's learning rule
$$\Delta w_{ij} = \eta\,\text{out}_j\,\text{in}_i - \eta\,\text{out}_j\,w_{ij}\,\text{out}_j = \eta\,\text{out}_j\left(\text{in}_i - \text{out}_j\,w_{ij}\right)$$
which is a useful, stable form of Hebbian learning.
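Below is a minimal MATLAB sketch of Oja's rule for a single linear neuron; the synthetic data, learning rate, and number of epochs are assumptions for illustration only.

```matlab
% Oja's rule for one linear neuron: dw = eta * y * (x - y * w).
% The subtractive term keeps the weight vector bounded (roughly unit
% length), so w drifts towards the first principal direction of the data.
rng(0);                                  % reproducible synthetic data (assumed)
X   = randn(500, 3) * diag([3 1 0.5]);   % zero-mean data, one dominant direction
eta = 0.01;                              % learning rate (assumed)
w   = rand(3, 1);                        % random initial weights
for epoch = 1:50
    for k = 1:size(X, 1)
        x = X(k, :)';                    % one sample as a column vector
        y = w' * x;                      % linear neuron output
        w = w + eta * y * (x - y * w);   % Oja update
    end
end
disp(norm(w))                            % close to 1 after convergence
```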
Hebbian-based Principal Component Analysis
- A single neuron uses the APN mechanism of learning.
- The synaptic weights must be updated.
The neuron converges to the eigenvector $q_1$ associated with the largest eigenvalue of the input correlation matrix (the first principal component); this APN learning is used for data reduction.
The network is a single layer of $L$ neurons fed by an $m$-dimensional input, so the weight matrix has dimension $L \times m$.

[Figure: single-layer feedforward network with inputs x1, x2, x3, ..., xm and outputs y1, y2, ..., yn.]

Linear neuron model:

$$y_j(n) = \sum_{i=1}^{m} w_{ji}(n)\, x_i(n) \qquad (1)$$

where $y_j$ is the output of neuron $j$, $n$ is the iteration number, and $j = 1, 2, 3, \ldots, L$.
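Equation (1) can be written compactly in MATLAB; the dimensions below are assumed for illustration.

```matlab
% Linear neuron model y_j(n) = sum_i w_ji(n) * x_i(n), evaluated for all
% L output neurons at once (m and L are assumed example dimensions).
m = 4; L = 2;        % input dimension and number of output neurons
W = rand(L, m);      % W(j, i) = w_ji, the synaptic weight matrix
x = rand(m, 1);      % input vector x(n)
y = W * x;           % y(j) = sum over i of W(j, i) * x(i)
```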
Generalized Hebbian Algorithm:
This algorithm is based on the change in the synaptic weights; its mathematical form is as follows:

$$\Delta w_{ji}(n) = \eta\, y_j(n)\left[x_i(n) - \sum_{k=1}^{j} w_{ki}(n)\, y_k(n)\right] \qquad (2)$$

where $i = 1, 2, 3, \ldots, m$ and $j = 1, 2, 3, \ldots, L$.

Modified input term:

$$\Delta w_{ji}(n) = \eta\, y_j(n)\left[x'_i(n) - w_{ji}(n)\, y_j(n)\right] \qquad (3)$$

where $i = 1, 2, 3, \ldots, m$, $j = 1, 2, 3, \ldots, L$, and

$$x'_i(n) = x_i(n) - \sum_{k=1}^{j-1} w_{ki}(n)\, y_k(n) \qquad (4)$$

Now rewrite equation (3); the new equation is as follows:

$$\Delta w_{ji}(n) = \eta\, y_j(n)\, x''_i(n), \qquad x''_i(n) = x'_i(n) - w_{ji}(n)\, y_j(n)$$

This is known as generalized Hebbian learning.
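The following is a minimal MATLAB sketch of the generalized Hebbian algorithm of equation (2), using synthetic data and assumed parameter values; after training, each row of W should line up (up to sign) with one of the leading eigenvectors of the input correlation matrix, which is exactly what the derivation in the following sections establishes.

```matlab
% Generalized Hebbian learning:
% dw_ji = eta * y_j * (x_i - sum_{k<=j} w_ki * y_k).
% Synthetic zero-mean data with distinct variances (assumed) so the
% principal directions are easy to recover.
rng(1);
m = 4; L = 3;                             % input dimension, number of components
X = randn(2000, m) * diag([3 2 1 0.5]);   % synthetic data (assumed)
eta = 0.002;                              % learning rate (assumed)
W = 0.01 * randn(L, m);                   % row j holds the weights of neuron j
for epoch = 1:100
    for n = 1:size(X, 1)
        x = X(n, :)';                     % current input vector
        y = W * x;                        % outputs of all L neurons
        for j = 1:L
            % deflated input: subtract the contributions of neurons 1..j
            xj = x - W(1:j, :)' * y(1:j);
            W(j, :) = W(j, :) + eta * y(j) * xj';
        end
    end
end
% Compare with the leading eigenvectors of the sample correlation matrix.
C = (X' * X) / size(X, 1);
[V, D] = eig(C);
[~, order] = sort(diag(D), 'descend');
disp(abs(W * V(:, order(1:L))))           % close to the identity if converged
```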



Signal Flow Graph in Hebbian Learning
[Figure: signal-flow graph of the generalized Hebbian algorithm for neuron $j$, showing the input $x_i(n)$, the feedback terms $-y_1(n)\,w_{1i}(n), -y_2(n)\,w_{2i}(n), \ldots, -y_{j-1}(n)\,w_{j-1,i}(n)$ and $-y_j(n)$, and the weight update from $w_{ji}(n)$ to $w_{ji}(n+1)$ through a unit delay.]

The diagram above depicts the signal flow; here a decision means a concluding result obtained without supervision, so the new equation becomes
$$\Delta w_{ji}(n) = w_{ji}(n+1) - w_{ji}(n)$$
which is iterated to extract principal components 1, 2, 3, ...
Equation (3) in matrix form

Equation (3) is as follows:

$$\Delta w_{ji}(n) = \eta\, y_j(n)\left[x'_i(n) - w_{ji}(n)\, y_j(n)\right] \qquad (3)$$

In vector form this becomes

$$\Delta \mathbf{w}_j(n) = \eta\, y_j(n)\left[\mathbf{x}'(n) - y_j(n)\, \mathbf{w}_j(n)\right] \qquad (5)$$

where $j = 1, 2, 3, \ldots, L$ and

$$\mathbf{x}'(n) = \mathbf{x}(n) - \sum_{k=1}^{j-1} y_k(n)\, \mathbf{w}_k(n)$$
Observations from equation (5)

1. For the first neuron, $j = 1$:
$$\Delta \mathbf{w}_1(n) = \eta\, y_1(n)\left[\mathbf{x}(n) - y_1(n)\, \mathbf{w}_1(n)\right]$$
2. Increment $n$ to $n + 1$, noting that $\Delta \mathbf{w}_1(n) = \mathbf{w}_1(n+1) - \mathbf{w}_1(n)$.
3. Taking the negative weight term to the right-hand side:
$$\mathbf{w}_1(n+1) = \mathbf{w}_1(n) + \eta\, y_1(n)\left[\mathbf{x}(n) - y_1(n)\, \mathbf{w}_1(n)\right]$$
This is Oja's result for the component $\mathbf{w}_1(n)$; the equation above extracts the first principal component.
For the second neuron, $j = 2$:
$$\Delta \mathbf{w}_2(n) = \eta\, y_2(n)\left[\mathbf{x}'(n) - y_2(n)\, \mathbf{w}_2(n)\right], \qquad \mathbf{x}'(n) = \mathbf{x}(n) - \mathbf{w}_1(n)\, y_1(n)$$
Here $\mathbf{x}'(n)$ is the input vector from which the first principal component has been eliminated, i.e. the input vector from which the first eigenvector of the correlation matrix has been removed. This neuron therefore extracts the first principal component of $\mathbf{x}'(n)$, which is equivalent to the second principal component of the original vector $\mathbf{x}(n)$.
For the third neuron, $j = 3$, the first two terms are removed from the input:
$$\mathbf{x}''(n) = \mathbf{x}(n) - \mathbf{w}_1(n)\, y_1(n) - \mathbf{w}_2(n)\, y_2(n)$$
This will extract the third principal component with respect to the original input.


This completes the derivation.



Practical Demonstration in MATLAB

Result:

Here you can see that Hebb learning classifies a two-dimensional input pattern.
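The original MATLAB code and result figure are not reproduced in this text, so the following is a minimal sketch of a typical demonstration; the bipolar AND problem is an assumed two-dimensional example, not necessarily the one used in the original experiment.

```matlab
% Hebb net classifying two-dimensional bipolar patterns (assumed example:
% the AND function with bipolar inputs and targets).
X = [ 1  1;  1 -1; -1  1; -1 -1];   % two-dimensional input patterns
T = [ 1; -1; -1; -1];               % bipolar targets
w = [0 0];                          % weights start at zero
b = 0;                              % bias starts at zero
for k = 1:size(X, 1)
    w = w + T(k) * X(k, :);         % Hebb rule: dw = t * x
    b = b + T(k);                   % bias update: db = t
end
y = sign(X * w' + b);               % recall phase: classify the patterns
disp([T y])                         % targets vs. network outputs (they match)
```

With these patterns the learned decision boundary, $2x_1 + 2x_2 - 2 = 0$, separates the single positive pattern from the three negative ones.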
