IMPLEMENTATION OF
LEARNING VECTOR QUANTIZATION (LVQ)
USING MATLAB
BURAK ECE
PROJECT ADVISOR
Asst. Prof. Dr. Hasan Hüseyin Çelik
Learning Vector Quantization (LVQ) Neural Networks
Architecture
An LVQ network has a first competitive layer and a second linear layer. The competitive
layer learns to classify input vectors in much the same way as the competitive layers
of Cluster with Self-Organizing Map Neural Network. The linear layer transforms the
competitive layer's classes into target classifications defined by the user. The classes
learned by the competitive layer are referred to as subclasses and the classes of the
linear layer as target classes.
Both the competitive and linear layers have one neuron per (sub or target) class. Thus,
the competitive layer can learn up to S1 subclasses. These, in turn, are combined by
the linear layer to form S2 target classes. (S1 is always larger than S2.)
For example, suppose neurons 1, 2, and 3 in the competitive layer all learn subclasses
of the input space that belongs to the linear layer target class 2. Then competitive
neurons 1, 2, and 3 will have LW2,1 weights of 1.0 to neuron n2 in the linear layer, and
weights of 0 to all other linear neurons. Thus, the linear neuron produces a 1 if any of
the three competitive neurons (1, 2, or 3) wins the competition and outputs a 1. This is
how the subclasses of the competitive layer are combined into target classes in the
linear layer.
In short, a 1 in the ith row of a1 (the rest of the elements of a1 will be zero) effectively
picks the ith column of LW2,1 as the network output. Each such column contains a
single 1, corresponding to a specific class. Thus, subclass 1s from layer 1 are put into
various classes by the LW2,1 a1 multiplication in layer 2.
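This column-picking multiplication can be sketched concretely. The layout below (S1 = 4 subclasses combined into S2 = 2 target classes) is a hypothetical example for illustration, not the trained network from this report:

```python
import numpy as np

# Each column of LW21 holds a single 1 linking one competitive
# (subclass) neuron to its target class.
LW21 = np.array([[1, 1, 0, 0],   # subclasses 1 and 2 -> target class 1
                 [0, 0, 1, 1]])  # subclasses 3 and 4 -> target class 2

# a1 is the competitive layer output: one-hot, with the 1 at the
# winning subclass neuron (here neuron 3).
a1 = np.array([0, 0, 1, 0])

# Multiplying picks out column 3 of LW21 as the network output.
a2 = LW21 @ a1
print(a2)  # [0 1] -> the input is assigned to target class 2
```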
You know ahead of time what fraction of the layer 1 neurons should be classified into
the various class outputs of layer 2, so you can specify the elements of LW2,1 at the
start. However, you have to go through a training procedure to get the first layer to
produce the correct subclass output for each vector of the training set. First, consider
how to create the original network.
Creating an LVQ Network
Each target vector has a single 1; the rest of its elements are 0. The position of the 1
indicates the proper classification of the associated input. For instance, suppose the
input vectors have three elements and each input vector is to be assigned to one of
four classes. An input belonging to the third class then has the target vector
[0; 0; 1; 0]: the 1 in the third row marks class 3.
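This target construction is what MATLAB's ind2vec performs. As a language-neutral sketch (the helper name below is illustrative, not a toolbox function):

```python
import numpy as np

def one_hot_targets(indices, n_classes):
    """Mimic MATLAB's ind2vec: one column per sample, with a single 1
    in the row given by the (1-based) class index."""
    T = np.zeros((n_classes, len(indices)), dtype=int)
    for col, cls in enumerate(indices):
        T[cls - 1, col] = 1
    return T

# An input assigned to the third of four classes gets [0; 0; 1; 0].
t = one_hot_targets([3], 4)
print(t.ravel())  # [0 0 1 0]
```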
To train the network, an input vector p is presented, and the distance from p to each
row of the input weight matrix IW1,1 is computed with the function negdist. The hidden
neurons of layer 1 compete. Suppose that the i*th element of n1 is most positive, so
neuron i* wins the competition. Then the competitive transfer function produces a 1 as
the i*th element of a1. All other elements of a1 are 0.
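Assuming negdist returns the negative Euclidean distance from p to each row of IW1,1, as the text describes, the layer 1 competition can be sketched as (the weight values here are made up for illustration):

```python
import numpy as np

# Three hidden neurons: one prototype (row of IW11) each.
IW11 = np.array([[ 0.0, 0.0],
                 [ 1.0, 1.0],
                 [-1.0, 0.5]])
p = np.array([0.9, 1.1])

# negdist-style input: negative distance, so the closest row is
# the most positive element of n1.
n1 = -np.linalg.norm(IW11 - p, axis=1)
i_star = int(np.argmax(n1))          # winning neuron

# Competitive transfer function: 1 at the winner, 0 elsewhere.
a1 = np.zeros(len(IW11))
a1[i_star] = 1
print(i_star, a1)  # 1 [0. 1. 0.]
```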
When a1 is multiplied by the layer 2 weights LW2,1, the single 1 in a1 selects the class k*
associated with the input. Thus, the network has assigned the input vector p to class k*,
and the k*th element of a2 will be 1. Of course, this assignment can be a good one or a
bad one, for tk* can be 1 or 0, depending on whether the input belonged to class k* or not.
Adjust the i*th row of IW1,1 in such a way as to move this row closer to the input
vector p if the assignment is correct, and to move the row away from p if the
assignment is incorrect. If p is classified correctly, the new value of the row is

IW1,1(i*,:) = IW1,1(i*,:) + alpha*(p' - IW1,1(i*,:))

and if p is classified incorrectly,

IW1,1(i*,:) = IW1,1(i*,:) - alpha*(p' - IW1,1(i*,:))

where alpha is the learning rate.
You can make these corrections to the i*th row of IW1,1 automatically, without affecting
other rows of IW1,1, by back-propagating the output errors to layer 1.
Such corrections move the hidden neuron toward vectors that fall into the class for
which it forms a subclass, and away from vectors that fall into other classes.
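A minimal sketch of this LVQ1 update, the rule that learnlv1 implements (the function name and learning-rate value below are illustrative):

```python
import numpy as np

def lvq1_update(IW11, p, i_star, correct, alpha=0.1):
    """One LVQ1 step: move the winning row of IW11 toward p if the
    classification was correct, away from p otherwise."""
    delta = alpha * (p - IW11[i_star])
    IW11[i_star] += delta if correct else -delta
    return IW11

W = np.array([[0.0, 0.0],
              [1.0, 1.0]])

# Correct classification: the winning row moves 10% of the way to p.
W = lvq1_update(W, np.array([0.5, 0.0]), i_star=0, correct=True)
print(W[0])  # [0.05 0.  ]
```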
The learning function that implements these changes in the layer 1 weights in LVQ
networks is learnlv1. It can be applied during training.
Training
Next you need to train the network to obtain first-layer weights that lead to the correct
classification of input vectors. You do this with train, as in the following commands.
First, set the number of training epochs to 150, then call train:
net.trainParam.epochs = 150;
net = train(net,x,t);
Now confirm the first-layer weights.
net.IW{1,1}
ans =
0.3283 0.0051
-0.1366 0.0001
-0.0263 0.2234
0 -0.0685
Plotting these weights together with the input vectors (see the plotting commands in
the MATLAB code below) shows that they have moved toward their respective
classification groups.
To confirm that these weights do indeed lead to the correct classification, take the
matrix x as input and simulate the network. Then see what classifications are produced
by the network.
Y = net(x);
Yc = vec2ind(Y)
This gives
Yc = 1 1 1 2 2 2 2 1 1 1
which is expected. As a last check, try an input close to a vector that was used in
training.
pchk1 = [0; 0.5];
Y = net(pchk1);
Yc1 = vec2ind(Y)
This gives
Yc1 = 2
This looks right, because pchk1 is close to other vectors classified as 2. Similarly,
pchk2 = [1; 0];
Y = net(pchk2);
Yc2 = vec2ind(Y)
gives
Yc2 = 1
This looks right too, because pchk2 is close to other vectors classified as 1.
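For readers without the Neural Network Toolbox, the whole procedure can be sketched in plain NumPy on the same ten training vectors. The prototype initialization and subclass-to-class assignment below are assumptions chosen for illustration; lvqnet picks its own initial weights and assigns subclasses according to class frequencies:

```python
import numpy as np

# Same training set as the report: ten 2-element inputs, two classes.
x = np.array([[-3, -2, -2, 0, 0,  0,  0, 2,  2, 3],
              [ 0,  1, -1, 2, 1, -1, -2, 1, -1, 0]], dtype=float)
c = np.array([1, 1, 1, 2, 2, 2, 2, 1, 1, 1])

# Four prototypes (rows), two subclasses per target class, seeded
# near the data regions (assumed initialization).
W = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
proto_class = np.array([1, 1, 2, 2])

alpha = 0.1
for epoch in range(150):
    for j in range(x.shape[1]):
        p = x[:, j]
        i_star = np.argmin(np.linalg.norm(W - p, axis=1))  # competition
        sign = 1.0 if proto_class[i_star] == c[j] else -1.0
        W[i_star] += sign * alpha * (p - W[i_star])        # LVQ1 rule

# Classify every training vector with the learned prototypes.
pred = [int(proto_class[np.argmin(np.linalg.norm(W - x[:, j], axis=1))])
        for j in range(x.shape[1])]
print(pred)  # [1, 1, 1, 2, 2, 2, 2, 1, 1, 1], matching c
```

With this initialization each prototype stays inside its own cluster, so the trained prototypes classify all ten vectors correctly, mirroring the Yc output above.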
MATLAB CODE
% Training data: ten 2-element input vectors (one per column)
x = [-3 -2 -2 0 0 0 0 +2 +2 +3;
0 +1 -1 +2 +1 -1 -2 +1 -1 0];
% Class index of each column of x
c = [1 1 1 2 2 2 2 1 1 1];
% Convert class indices to one-hot target vectors
t = ind2vec(c);

% Plot the input vectors, colored by class
colormap(hsv);
plotvec(x,c)
title('Input Vectors');
xlabel('x(1)');
ylabel('x(2)');

% Create an LVQ network with 4 hidden (competitive) neurons and a
% learning rate of 0.1, then configure it for the data
net = lvqnet(4,0.1);
net = configure(net,x,t);

% Overlay the initial first-layer weight vectors on the input plot
hold on
w1 = net.IW{1};
plot(w1(1,1),w1(1,2),'ow')
title('Input/Weight Vectors');
xlabel('x(1), w(1)');
ylabel('x(2), w(2)');

% Train for 150 epochs
net.trainParam.epochs = 150;
net = train(net,x,t);

% Replot the inputs together with the trained weight vectors
cla;
plotvec(x,c);
hold on;
plotvec(net.IW{1}',vec2ind(net.LW{2}),'o');

% Classify a new input vector
x1 = [0.2; 1];
y1 = vec2ind(net(x1));