deep learning --- SAE (stacked autoencoder)


This note on the stacked autoencoder (SAE) is based on the UFLDL tutorial page: http://ufldl.stanford.edu/wiki/index.php/Stacked_Autoencoders

A stacked autoencoder is a neural network consisting of multiple layers of sparse autoencoders in which the outputs of each layer are wired to the inputs of the successive layer. In other words, the hidden layer of the first autoencoder (features 1) is used as the input to a second autoencoder; the hidden layer obtained there (features 2) is used as the input to the next one, and so on. The features obtained at the end are fed as the input set to a softmax classifier (or some other classifier) for training. After the whole network has been trained, the feature matrices obtained at each step are combined with the classifier's parameters into a new network. (rough paraphrase, for reference only)
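To make the building block concrete, below is a minimal sketch of a single sparse autoencoder, assuming PyTorch (the UFLDL tutorial itself uses MATLAB). The KL-divergence sparsity penalty follows the UFLDL formulation; the class name, layer sizes, and the hyperparameters rho and beta are illustrative.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """One sparse autoencoder: encode to a hidden layer, decode back."""
    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.encoder = nn.Linear(n_in, n_hidden)
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))       # hidden features
        x_hat = torch.sigmoid(self.decoder(h))   # reconstruction of x
        return x_hat, h

def sparse_loss(x, x_hat, h, rho=0.05, beta=3.0):
    """Reconstruction error plus a KL sparsity penalty on mean activations."""
    recon = ((x_hat - x) ** 2).mean()
    rho_hat = h.mean(dim=0).clamp(1e-6, 1 - 1e-6)  # average activation per hidden unit
    kl = (rho * torch.log(rho / rho_hat)
          + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()
    return recon + beta * kl
```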

The following example should make this easier to understand.

A concrete example: training an MNIST digit classifier with 2 hidden layers.

First, you would train a sparse autoencoder on the raw inputs x^(k) to learn primary features h^(1)(k) on the raw input.

Next, you would feed the raw input into this trained sparse autoencoder, obtaining the primary feature activations h^(1)(k) for each of the inputs x^(k). You would then use these primary features as the "raw input" to another sparse autoencoder to learn secondary features h^(2)(k) on these primary features.
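These two greedy pretraining steps might look like the following sketch, reusing SparseAutoencoder and sparse_loss from the sketch above; X is assumed to be an (N, 784) float tensor of flattened MNIST images, and the hidden sizes, epoch count, and learning rate are placeholders.

```python
import torch

def pretrain(ae, data, epochs=20, lr=1e-3):
    """Train one sparse autoencoder on `data` (full-batch, for brevity)."""
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    for _ in range(epochs):
        x_hat, h = ae(data)
        loss = sparse_loss(data, x_hat, h)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return ae

# Layer 1: learn primary features h^(1)(k) on the raw inputs x^(k).
ae1 = pretrain(SparseAutoencoder(784, 200), X)
with torch.no_grad():
    H1 = torch.sigmoid(ae1.encoder(X))      # primary feature activations

# Layer 2: learn secondary features h^(2)(k) on the primary features.
ae2 = pretrain(SparseAutoencoder(200, 200), H1)
with torch.no_grad():
    H2 = torch.sigmoid(ae2.encoder(H1))     # secondary feature activations
```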

Following this, you would feed the primary features into the second sparse autoencoder to obtain the secondary feature activations h^(2)(k) for each of the primary features h^(1)(k) (which correspond to the primary features of the corresponding inputs x^(k)). You would then treat these secondary features as "raw input" to a softmax classifier, training it to map secondary features to digit labels.
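The softmax step could then be sketched as below, assuming H2 from the previous sketch and a label tensor y of shape (N,) with integer classes 0-9; note that nn.CrossEntropyLoss applies the softmax internally, so the classifier outputs raw logits.

```python
import torch
import torch.nn as nn

clf = nn.Linear(200, 10)                 # softmax classifier (outputs logits)
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()          # log-softmax + negative log-likelihood

for _ in range(50):
    loss = loss_fn(clf(H2), y)           # the features H2 stay fixed at this stage
    opt.zero_grad()
    loss.backward()
    opt.step()
```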


Finally, you would combine all three layers together to form a stacked autoencoder with 2 hidden layers and a final softmax classifier layer capable of classifying the MNIST digits as desired.

Combining them yields the new network:
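In code, the combination step is just wiring the two trained encoders and the classifier in sequence (a sketch, assuming ae1, ae2, and clf from the sketches above):

```python
import torch.nn as nn

# x -> h^(1) -> h^(2) -> class logits, all in one network.
stacked = nn.Sequential(
    ae1.encoder, nn.Sigmoid(),
    ae2.encoder, nn.Sigmoid(),
    clf,
)
```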

Rough experimental procedure:

  1. Initialize the parameters;
  2. Train the first autoencoder on the raw data, then compute the L1 features;
  3. Train the second autoencoder on the L1 features, then compute the L2 features;
  4. Train a softmax classifier on the L2 features;
  5. Fine-tune the parameters of the combined stacked autoencoders + softmax model with the BP algorithm (see the sketch after this list);
  6. Test the model.
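Steps 5 and 6 might look like the sketch below, assuming stacked, X, and y from the earlier sketches plus held-out placeholders X_test and y_test; fine-tuning backpropagates the classification loss through all the pretrained layers at once.

```python
import torch
import torch.nn as nn

# Step 5: fine-tune the whole stack with backpropagation.
opt = torch.optim.Adam(stacked.parameters(), lr=1e-4)  # small lr: weights are pretrained
loss_fn = nn.CrossEntropyLoss()
for _ in range(50):
    loss = loss_fn(stacked(X), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Step 6: evaluate on held-out data.
with torch.no_grad():
    acc = (stacked(X_test).argmax(dim=1) == y_test).float().mean()
    print(f"test accuracy: {acc.item():.4f}")
```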

Stacked autoencoders have strong expressive power and enjoy all the advantages of deep networks. Autoencoders tend to learn feature representations of the data: in a stacked autoencoder, the first layer can learn first-order features, the second layer second-order features, and so on. For images, the first layer might learn edges, the second layer might learn how to combine edges into contours and corners, and higher layers may learn even more abstract and meaningful features. These learned features help us handle images better, for example in image classification and retrieval.
