Matlab LVQ1 Learning Algorithm
Learning Vector Quantization Networks
Architecture
The LVQ network architecture is shown below.
An LVQ network has a first competitive layer and a second linear layer. The competitive layer learns to classify input vectors in much the same way as the competitive layers of Self-Organizing Feature Maps described in this chapter. The linear layer transforms the competitive layer's classes into target classifications defined by the user. The classes learned by the competitive layer are referred to as subclasses and the classes of the linear layer as target classes.
Both the competitive and linear layers have one neuron per (sub or target) class. Thus, the competitive layer can learn up to S1 subclasses. These, in turn, are combined by the linear layer to form S2 target classes. (S1 is always larger than S2.)
For example, suppose neurons 1, 2, and 3 in the competitive layer all learn subclasses of the input space that belong to the linear layer target class 2. Then competitive neurons 1, 2, and 3 will have LW2,1 weights of 1.0 to neuron n2 in the linear layer, and weights of 0 to all other linear neurons. Thus, the linear neuron produces a 1 if any of the three competitive neurons (1, 2, or 3) wins the competition and outputs a 1. This is how the subclasses of the competitive layer are combined into target classes in the linear layer.
In short, a 1 in the ith row of a1 (the rest of the elements of a1 will be zero) effectively picks the ith column of LW2,1 as the network output. Each such column contains a single 1, corresponding to a specific class. Thus, the subclass 1s from layer 1 are put into various classes by the LW2,1a1 multiplication in layer 2.
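As a minimal sketch of this selection (the weight values below are hypothetical, chosen to match the three-subclass example above):

LW21 = [0 0 0 1;    % row 1: target class 1 (subclass 4 only)
        1 1 1 0];   % row 2: target class 2 (subclasses 1, 2, 3)
a1 = [0; 1; 0; 0];  % competitive neuron 2 wins the competition
a2 = LW21 * a1      % picks column 2 of LW21, so a2 = [0; 1]: class 2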
You know ahead of time what fraction of the layer 1 neurons should be classified into the various class outputs of layer 2, so you can specify the elements of LW2,1 at the start. However, you have to go through a training procedure to get the first layer to produce the correct subclass output for each vector of the training set. This training is discussed in Training. First, consider how to create the original network.
Creating an LVQ Network
You can create an LVQ network with the function newlvq,
net = newlvq(PR,S1,PC,LR,LF)
where
PR is an R-by-2 matrix of minimum and maximum values for R input elements.
S1 is the number of first-layer hidden neurons.
PC is an S2-element vector of typical class percentages.
LR is the learning rate (default 0.01).
LF is the learning function (default is learnlv1).
Suppose you have 10 input vectors. Create a network that assigns each of these input vectors to one of four subclasses. Thus, there are four neurons in the first competitive layer. These subclasses are then assigned to one of two output classes by the two neurons in layer 2. The input vectors and targets are specified by
P = [-3 -2 -2 0 0 0 0 2 2 3; 0 1 -1 2 1 -1 -2 1 -1 0];
and
Tc = [1 1 1 2 2 2 2 1 1 1];
It might help to show the details of what you get from these two lines of code.
P =
    -3    -2    -2     0     0     0     0     2     2     3
     0     1    -1     2     1    -1    -2     1    -1     0

Tc =
     1     1     1     2     2     2     2     1     1     1
A plot of the input vectors follows.
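A minimal sketch to reproduce this plot from P and Tc (the marker styles are a choice, not taken from the original figure):

plot(P(1,Tc==1), P(2,Tc==1), 'o', P(1,Tc==2), P(2,Tc==2), '+')  % class 1 as 'o', class 2 as '+'
xlabel('P(1,:)'), ylabel('P(2,:)')
legend('class 1','class 2')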
As you can see, there are four subclasses of input vectors. You want a network that classifies p1, p2, p3, p8, p9, and p10 to produce an output of 1, and that classifies vectors p4, p5, p6, and p7 to produce an output of 2. Note that this problem is nonlinearly separable, and so cannot be solved by a perceptron, but an LVQ network has no difficulty.
Next convert the Tc matrix to target vectors.
T = ind2vec(Tc);
This gives a sparse matrix T that can be displayed in full with
targets = full(T)
which gives
targets =
     1     1     1     0     0     0     0     1     1     1
     0     0     0     1     1     1     1     0     0     0
This looks right. It says, for instance, that if you have the first column of P as input, you should get the first column of targets as an output; and that output says the input falls in class 1, which is correct. Now you are ready to call newlvq.
Call newlvq with the proper arguments so that it creates a network with four neurons in the first layer and two neurons in the second layer. The first-layer weights are initialized to the centers of the input ranges with the function midpoint. The second-layer weights have 60% (6 of the 10 in Tc above) of their columns with a 1 in the first row (corresponding to class 1), and 40% of their columns with a 1 in the second row (corresponding to class 2).
net = newlvq(minmax(P),4,[.6 .4]);
Confirm the initial values of the first-layer weight matrix.
net.IW{1,1}

ans =
     0     0
     0     0
     0     0
     0     0
These zero weights are indeed the values at the midpoint of the ranges (−3 to +3) of the inputs, as you would expect when usingmidpoint for initialization.
You can look at the second-layer weights with
net.LW{2,1}

ans =
     1     1     0     0
     0     0     1     1
This makes sense too. It says that if the competitive layer produces a 1 as the first or second element, the input vector is classified as class 1; otherwise it is classified as class 2.
You might notice that the first two competitive neurons are connected to the first linear neuron (with weights of 1), while the second two competitive neurons are connected to the second linear neuron. All other weights between the competitive neurons and linear neurons have values of 0. Thus, each of the two target classes (the linear neurons) is, in fact, the union of two subclasses (the competitive neurons).
You can simulate the network with sim. Use the original P matrix as input just to see what you get.
Y = sim(net,P);
Yc = vec2ind(Y)

Yc =
     1     1     1     1     1     1     1     1     1     1
The network classifies all inputs into class 1. Because this is not what you want, you have to train the network (adjusting the weights of layer 1 only), before you can expect a good result. The next two sections discuss two LVQ learning rules and the training process.
LVQ1 Learning Rule (learnlv1)
LVQ learning in the competitive layer is based on a set of input/target pairs.
Each target vector has a single 1. The rest of its elements are 0. The 1 tells the proper classification of the associated input. For instance, consider a training pair such as the following (the target vector is fixed by the description below; the input values are illustrative):

p1 = [2; -1; 0],   t1 = [0; 0; 1; 0]

Here there are input vectors of three elements, and each input vector is to be assigned to one of four classes. The network is to be trained so that it classifies the input vector shown above into the third of four classes.
To train the network, an input vector p is presented, and the distance from p to each row of the input weight matrix IW1,1 is computed with the function negdist. The hidden neurons of layer 1 compete. Suppose that the ith element of n1 is most positive, and neuron i* wins the competition. Then the competitive transfer function produces a 1 as the i*th element of a1. All other elements of a1 are 0.
When a1 is multiplied by the layer 2 weights LW2,1, the single 1 in a1 selects the class k* associated with the input. Thus, the network has assigned the input vector p to class k*, and a2k* will be 1. Of course, this assignment can be a good one or a bad one, for tk* can be 1 or 0, depending on whether the input belonged to class k* or not.
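A by-hand sketch of this forward pass (using the net and P created earlier; assumes no bias term in layer 1, consistent with the description above):

p  = P(:,1);                   % pick an input vector
n1 = negdist(net.IW{1,1}, p);  % negative distance from p to each row of IW1,1
a1 = compet(n1);               % winner-take-all: a single 1 at position i*
a2 = net.LW{2,1} * a1          % selects the target class k* associated with i*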
Adjust the i*th row of IW1,1 in such a way as to move this row closer to the input vector p if the assignment is correct, and to move the row away from p if the assignment is incorrect. If p is classified correctly, compute the new value of the i*th row of IW1,1 as

i*IW1,1(q) = i*IW1,1(q-1) + α(p(q) - i*IW1,1(q-1))

On the other hand, if p is classified incorrectly, compute the new value of the i*th row of IW1,1 as

i*IW1,1(q) = i*IW1,1(q-1) - α(p(q) - i*IW1,1(q-1))

where α is the learning rate.
You can make these corrections to the i*th row of IW1,1 automatically, without affecting other rows of IW1,1, by back-propagating the output errors to layer 1.
Such corrections move the hidden neuron toward vectors that fall into the class for which it forms a subclass, and away from vectors that fall into other classes.
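A minimal sketch of one such LVQ1 update (p, istar, and the flag correct are hypothetical names standing for the current input, the winning row, and the result of the class check):

lr = 0.01;                     % learning rate (the LR argument of newlvq)
W  = net.IW{1,1};
if correct                     % winner's subclass maps to the right target class
    W(istar,:) = W(istar,:) + lr*(p' - W(istar,:));  % move row toward p
else
    W(istar,:) = W(istar,:) - lr*(p' - W(istar,:));  % move row away from p
end
net.IW{1,1} = W;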
The learning function that implements these changes in the layer 1 weights in LVQ networks is learnlv1. It can be applied during training.
Training
Next you need to train the network to obtain first-layer weights that lead to the correct classification of input vectors. You do this with train, as in the following commands. First, set the training epochs to 150. Then, use train:
net.trainParam.epochs = 150;
net = train(net,P,T);
Now confirm the first-layer weights.
net.IW{1,1}

ans =
    0.3283    0.0051
   -0.1366    0.0001
   -0.0263    0.2234
         0   -0.0685
The following plot shows that these weights have moved toward their respective classification groups.
To confirm that these weights do indeed lead to the correct classification, take the matrix P as input and simulate the network. Then see what classifications are produced by the network.
Y = sim(net,P);
Yc = vec2ind(Y)
This gives
Yc =
     1     1     1     2     2     2     2     1     1     1
which is expected. As a last check, try an input close to a vector that was used in training.
pchk1 = [0; 0.5];
Y = sim(net,pchk1);
Yc1 = vec2ind(Y)
This gives
Yc1 = 2
This looks right, because pchk1 is close to other vectors classified as 2. Similarly,
pchk2 = [1; 0];
Y = sim(net,pchk2);
Yc2 = vec2ind(Y)
gives
Yc2 = 1
This looks right too, because pchk2 is close to other vectors classified as 1.
You might want to try the demonstration program demolvq1. It follows the discussion of training given above.
Supplemental LVQ2.1 Learning Rule (learnlv2)
The following learning rule is one that might be applied after first applying LVQ1. It can improve the result of the first learning. This particular version of LVQ2 (referred to as LVQ2.1 in the literature [Koho97]) is embodied in the function learnlv2. Note again that LVQ2.1 is to be used only after LVQ1 has been applied.
Learning here is similar to that in learnlv1, except now two vectors of layer 1 that are closest to the input vector can be updated, provided that one belongs to the correct class and one belongs to a wrong class, and further provided that the input falls into a "window" near the midplane of the two vectors.
The window is defined by

min(di/dj, dj/di) > s

where

s = (1 - w)/(1 + w)

and di and dj are the Euclidean distances of p from i*IW1,1 and j*IW1,1, respectively. Take a value for w in the range 0.2 to 0.3. If you pick, for instance, 0.25, then s = 0.6. This means that if the minimum of the two distance ratios is greater than 0.6, the two vectors are adjusted. That is, if the input is near the midplane, adjust the two vectors, provided also that the input vector p and j*IW1,1 belong to the same class, and p and i*IW1,1 do not belong in the same class.
The adjustments made are

i*IW1,1(q) = i*IW1,1(q-1) - α(p(q) - i*IW1,1(q-1))

and

j*IW1,1(q) = j*IW1,1(q-1) + α(p(q) - j*IW1,1(q-1))
Thus, given two vectors closest to the input, as long as one belongs to the wrong class and the other to the correct class, and as long as the input falls in a midplane window, the two vectors are adjusted. Such a procedure allows a vector that is just barely classified correctly with LVQ1 to be moved even closer to the input, so the results are more robust.
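A minimal sketch of the window test and the paired update (p, lr, istar, and jstar are hypothetical names; per the convention above, istar is the wrong-class row and jstar the correct-class row):

wwin = 0.25;                   % window parameter w
s    = (1 - wwin)/(1 + wwin);  % s = 0.6 for w = 0.25
W    = net.IW{1,1};
di   = norm(p' - W(istar,:));  % distance from p to the wrong-class row
dj   = norm(p' - W(jstar,:));  % distance from p to the correct-class row
if min(di/dj, dj/di) > s       % p falls near the midplane of the two rows
    W(istar,:) = W(istar,:) - lr*(p' - W(istar,:));  % push wrong class away
    W(jstar,:) = W(jstar,:) + lr*(p' - W(jstar,:));  % pull correct class closer
end
net.IW{1,1} = W;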
Function       Description
competlayer    Create a competitive layer.
learnk         Kohonen learning rule.
selforgmap     Create a self-organizing map.
learncon       Conscience bias learning function.
boxdist        Distance between two position vectors.
dist           Euclidean distance weight function.
linkdist       Link distance function.
mandist        Manhattan distance weight function.
gridtop        Gridtop layer topology function.
hextop         Hexagonal layer topology function.
randtop        Random layer topology function.
newlvq         Create a learning vector quantization network.
learnlv1       LVQ1 weight learning function.
learnlv2       LVQ2 weight learning function.
Reposted from: http://www.mathworks.cn/help/toolbox/nnet/ug/bss4b_l-15.html#bss4b_l-18