Training a deep autoencoder or a classifier on MNIST digits: debugging, running, and understanding
The main purpose of running this program is to understand in depth the basic principles and architecture of a deep autoencoder: how it is constructed, how it is trained, how it extracts features from the input, and finally how those features are used for classification.
The main program is as follows:
mnistdeepauto.m
% Version 1.000
%
% Code provided by Ruslan Salakhutdinov and Geoff Hinton
%
% Permission is granted for anyone to copy, use, modify, or distribute this
% program and accompanying programs and documents for any purpose, provided
% this copyright notice is retained and prominently displayed, along with
% a note saying that the original programs are available from our
% web page.
% The programs and documents are distributed without any warranty, express or
% implied. As the programs were written for research purposes only, they have
% not been tested to the degree that would be advisable in any important
% application. All use of these programs is entirely at the user's own risk.

% This program pretrains a deep autoencoder for the MNIST dataset.
% You can set the maximum number of epochs for pretraining each layer,
% and you can set the architecture of the multilayer net.

clear all   % clear all workspace variables
close all   % close all figure windows

maxepoch = 10;   % In the Science paper we use maxepoch=50, but it works just fine.
numhid = 1000; numpen = 500; numpen2 = 250; numopen = 30;
% Sizes of the four hidden layers. Tracing these four variables through the
% code shows the encoder architecture is 784-1000-500-250-30.

fprintf(1,'Converting Raw files into Matlab format \n');
converter;
% The raw binary data supplied by the authors must first be converted into
% Matlab format. Note: when I ran this, the first fread call in converter.m
% raised an error about an invalid file identifier; check that the preceding
% fopen call actually succeeded (i.e., that the MNIST files are on the path).

fprintf(1,'Pretraining a deep autoencoder. \n');
fprintf(1,'The Science paper used 50 epochs. This uses %3i \n', maxepoch);

makebatches;
[numcases numdims numbatches] = size(batchdata);

fprintf(1,'Pretraining Layer 1 with RBM: %d-%d \n', numdims, numhid);
restart = 1;
rbm;
hidrecbiases = hidbiases;
save mnistvh vishid hidrecbiases visbiases;

fprintf(1,'\nPretraining Layer 2 with RBM: %d-%d \n', numhid, numpen);
batchdata = batchposhidprobs;   % hidden probs of layer 1 become the data for layer 2
numhid = numpen;
restart = 1;
rbm;
hidpen = vishid; penrecbiases = hidbiases; hidgenbiases = visbiases;
save mnisthp hidpen penrecbiases hidgenbiases;

fprintf(1,'\nPretraining Layer 3 with RBM: %d-%d \n', numpen, numpen2);
batchdata = batchposhidprobs;
numhid = numpen2;
restart = 1;
rbm;
hidpen2 = vishid; penrecbiases2 = hidbiases; hidgenbiases2 = visbiases;
save mnisthp2 hidpen2 penrecbiases2 hidgenbiases2;

fprintf(1,'\nPretraining Layer 4 with RBM: %d-%d \n', numpen2, numopen);
batchdata = batchposhidprobs;
numhid = numopen;
restart = 1;
rbmhidlinear;   % the 30-unit code layer uses linear hidden units
hidtop = vishid; toprecbiases = hidbiases; topgenbiases = visbiases;
save mnistpo hidtop toprecbiases topgenbiases;

backprop;   % unroll the four RBMs into a deep autoencoder and fine-tune
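To make the chaining in the script concrete, here is a minimal NumPy sketch (not the authors' code) of what the four `rbm;` calls do: each layer is an RBM trained with one-step contrastive divergence (CD-1), and the hidden probabilities of one trained RBM become the `batchdata` for the next, just as the lines `batchdata = batchposhidprobs; numhid = ...; rbm;` chain the layers. It is simplified in two ways: the real top layer uses linear hidden units (`rbmhidlinear`), here all units are binary, and the helper names `train_rbm` / `pretrain_stack` are illustrative, not from the original code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, num_hidden, epochs=2, lr=0.1, rng=None):
    """Train a binary-binary RBM with CD-1; return (weights, hidden biases, visible biases)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    num_cases, num_vis = data.shape
    W = 0.1 * rng.standard_normal((num_vis, num_hidden))
    hbias = np.zeros(num_hidden)
    vbias = np.zeros(num_vis)
    for _ in range(epochs):
        # Positive phase: hidden probabilities given the data.
        pos_hid = sigmoid(data @ W + hbias)
        hid_states = (pos_hid > rng.random(pos_hid.shape)).astype(float)
        # Negative phase: one reconstruction step (CD-1).
        neg_vis = sigmoid(hid_states @ W.T + vbias)
        neg_hid = sigmoid(neg_vis @ W + hbias)
        # Gradient step on <v h>_data - <v h>_recon.
        W += lr * (data.T @ pos_hid - neg_vis.T @ neg_hid) / num_cases
        vbias += lr * (data - neg_vis).mean(axis=0)
        hbias += lr * (pos_hid - neg_hid).mean(axis=0)
    return W, hbias, vbias

def pretrain_stack(data, layer_sizes, epochs=2):
    """Greedy layer-wise pretraining: hidden probs of layer k become the data for layer k+1."""
    params = []
    for num_hidden in layer_sizes:
        W, hbias, vbias = train_rbm(data, num_hidden, epochs=epochs)
        params.append((W, hbias, vbias))
        data = sigmoid(data @ W + hbias)  # plays the role of batchdata = batchposhidprobs
    return params

# Tiny demo: fake binary 784-dimensional "digits", same layer sizes as the paper.
rng = np.random.default_rng(0)
toy = (rng.random((100, 784)) > 0.8).astype(float)
stack = pretrain_stack(toy, [1000, 500, 250, 30])
```

After pretraining, `stack` holds the per-layer weights that the real script would save (`mnistvh`, `mnisthp`, `mnisthp2`, `mnistpo`) and then fine-tune jointly in `backprop`.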
Original post: http://blog.csdn.net/liyuanhao_1114/article/details/18033223