DeepLearning (2) Autoencoders and Sparsity: Understanding and Practice
In supervised learning, the training examples come with class labels. Now suppose we only have an unlabeled training set {x^(1), x^(2), x^(3), ...}, with each x^(i) a real-valued vector.

An autoencoder is an unsupervised learning algorithm: it is trained with backpropagation, but with the target values set equal to the inputs, i.e. y^(i) = x^(i).

In other words, the autoencoder tries to learn a function h_{W,b}(x) ≈ x, an approximation to the identity function, so that its output is close to its input.
Of course, the input and output of an autoencoder can never be exactly equal, but we force them to be equal, or at least as close as possible. (The poor network has no say in the matter.) That is: I ==> Sn ==> O, with I as close as possible to O. (Here I is the input, Sn is the output a of each hidden layer, and O is the network output.)
Why go to all this trouble? If all we want is something equal to I, why not just use I itself? Indeed, O itself is not what matters. What matters is Sn.
Why is Sn important? Look at the model above: the input I has 6 dimensions, the output O has 6 dimensions, but the middle layer Sn has only 3. What does that remind you of? PCA? Whitening? Close, but not quite. It is dimensionality reduction, but where PCA extracts the most significant components of the data, Sn here learns a more essential structure of the data. Why? Because we force the network to represent 6-dimensional data with only 3 dimensions; to pull that off, it has no choice but to discover structure present in the input.
So the 3-dimensional output Sn learned by the middle layer is a more essential feature representation of the input data, learned by the network. We can also stack more middle layers, as in the figure below:
That is: I –> S1 –> S2 –> … –> Sn –> O, where each middle layer learns a representation of the output of the layer before it.
Incidentally, this is the essence of deep learning: the deep model is a tool, and its purpose is to learn features of the input data.
In other words, for the final classification or recognition task we still need to attach a classifier (or some other model) on top.
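The bottleneck idea above can be sketched in a few lines. This is a minimal NumPy sketch in Python (rather than the article's MATLAB) of a hypothetical 6→3→6 network with random, untrained weights, just to show the shapes involved:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.random(6)                          # a 6-dimensional input I
W1 = rng.standard_normal((3, 6)); b1 = np.zeros(3)
W2 = rng.standard_normal((6, 3)); b2 = np.zeros(6)

s = sigmoid(W1 @ x + b1)                   # Sn: the 3-dimensional code
o = sigmoid(W2 @ s + b2)                   # O: 6-dimensional reconstruction

# training would tune W1, W2, b1, b2 to minimize ||o - x||^2,
# forcing the 3-dimensional s to capture the structure of x
```

Since s has only 3 numbers while x has 6, any network that reconstructs x well must have compressed it.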
Sparsity Constraint
The discussion so far assumed a small number of hidden units. But even when the number of hidden units is large (possibly larger than the number of input pixels), we can still discover structure in the input data by imposing other constraints on the autoencoder. In particular, if we impose a sparsity constraint on the hidden units, the autoencoder can discover interesting structure in the data even with many hidden units.
Sparsity can be explained simply as follows. If we regard a neuron as "active" when its output is close to 1 and "inactive" when its output is close to 0, then the constraint that neurons be inactive most of the time is called the sparsity constraint. Here we assume the activation function is the sigmoid. If you use tanh instead, a neuron is considered inactive when its output is close to -1.
Let a_j^(2)(x) denote the activation of hidden unit j when the network is given input x. We then define the average activation of hidden unit j (averaged over the training set) as

ρ̂_j = (1/m) Σ_{i=1}^{m} a_j^(2)(x^(i)),

where m is the number of training examples.
Then we impose one more demand on the poor network, the constraint

ρ̂_j = ρ,

where ρ is the sparsity parameter, typically a small value close to zero (say ρ = 0.05). In other words, we want the average activation of every hidden unit j to be small.
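Computing the average activation is nothing more than a row mean over the training set. A tiny NumPy sketch in Python (the activation matrix here is made up for illustration; rows are hidden units, columns are examples):

```python
import numpy as np

# hypothetical hidden activations: 3 hidden units, 5 training examples
a2 = np.array([[0.9, 0.1, 0.0, 0.0, 0.0],
               [0.1, 0.1, 0.1, 0.1, 0.1],
               [0.0, 0.0, 0.0, 0.0, 1.0]])

rho_hat = a2.mean(axis=1)   # average activation of each hidden unit
# rho_hat = [0.2, 0.1, 0.2]
```

The sparsity constraint asks each entry of rho_hat to be near the target ρ.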
Why be so hard on the network? Why insist that only a small fraction of the hidden units be active while most stay near zero? Because what we are doing is imitating the human brain: neural networks were modeled on biological neurons, and so is deep learning. The brain contains an enormous number of neurons, yet when a typical natural image enters through the visual system it stimulates only a small fraction of them, while the vast majority remain inhibited. Moreover, most natural images can be represented as the superposition of a small number of basic elements (surfaces or edges). Put differently, sparsity helps us extract the essential features of natural images using only a few neurons.
To achieve this, we add an extra penalty term to our optimization objective, one that penalizes hidden units whose average activation ρ̂_j deviates significantly from ρ. One common choice is

Σ_{j=1}^{s2} [ ρ log(ρ/ρ̂_j) + (1-ρ) log((1-ρ)/(1-ρ̂_j)) ],

where s2 is the number of hidden units and the index j runs over all of them.
In terms of relative entropy (KL divergence), the same penalty can be written as

Σ_{j=1}^{s2} KL(ρ ‖ ρ̂_j),

where KL(ρ ‖ ρ̂_j) = ρ log(ρ/ρ̂_j) + (1-ρ) log((1-ρ)/(1-ρ̂_j)) is the KL divergence between a Bernoulli random variable with mean ρ and one with mean ρ̂_j.
Suppose ρ = 0.2, and plot KL(ρ ‖ ρ̂_j) as a function of ρ̂_j, as in the figure above. The penalty reaches its minimum of 0 at ρ̂_j = ρ and blows up as ρ̂_j approaches 0 or 1, so minimizing it pushes ρ̂_j toward ρ.
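The shape of the penalty is easy to confirm numerically. A small Python/NumPy sketch of the KL penalty term (the function name `kl_penalty` is mine, not from the lecture notes):

```python
import numpy as np

def kl_penalty(rho, rho_hat):
    """Sum over hidden units of KL(rho || rho_hat_j) between Bernoulli means."""
    rho_hat = np.asarray(rho_hat, dtype=float)
    return float(np.sum(rho * np.log(rho / rho_hat)
                        + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))))

rho = 0.2
kl_penalty(rho, [0.2, 0.2])    # → 0.0: no penalty when rho_hat equals rho
kl_penalty(rho, [0.5, 0.5])    # small positive penalty
kl_penalty(rho, [0.9, 0.99])   # large penalty as rho_hat approaches 1
```

This matches the KL loop in the MATLAB code below, which accumulates exactly these terms over the hidden units.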
Matlab in Practice
The input data here are 8x8 image patches, each reshaped into a 64x1 vector, with 10000 samples in total for training. The network learns the features of the patches, which turn out to be the edges in the images. The result looks like this:
The code is as follows (sparseAutoencoderCost.m):
```matlab
function [cost,grad] = sparseAutoencoderCost(theta, visibleSize, hiddenSize, ...
                                             lambda, sparsityParam, beta, data)
% visibleSize:   the number of input units (here 64)
% hiddenSize:    the number of hidden units (here 25)
% lambda:        weight decay parameter
% sparsityParam: the desired average activation for the hidden units (denoted
%                in the lecture notes by the greek letter rho)
% beta:          weight of the sparsity penalty term
% data:          64x10000 matrix of training data; data(:,i) is the i-th example
%
% theta is a vector (because minFunc expects the parameters as a vector), so we
% first unpack it into the (W1, W2, b1, b2) matrix/vector format of the notes.

% learning rate (chosen by hand)
alpha = 0.01;

% average activation of the hidden units
p = zeros(hiddenSize, 1);

W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);     % 25x64
W2 = reshape(theta(hiddenSize*visibleSize+1:2*hiddenSize*visibleSize), ...
             visibleSize, hiddenSize);                                      % 64x25
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize); % 25x1
b2 = theta(2*hiddenSize*visibleSize+hiddenSize+1:end);                      % 64x1

% one iteration of batch gradient descent; data is 64x10000
numPatches = size(data, 2);
KLdist = 0;

%% Forward pass (vectorized)
a2 = sigmoid(W1*data + repmat(b1, [1, numPatches]));   % 25x10000
p  = sum(a2, 2);
a3 = sigmoid(W2*a2 + repmat(b2, [1, numPatches]));     % 64x10000
J_sparse = 0.5 * sum(sum((a3 - data).^2));

%{
% Forward pass (loop version, equivalent but much slower)
J_sparse = 0;
for curPatch = 1:numPatches
    a2(:,curPatch) = sigmoid(W1 * data(:,curPatch) + b1);
    p = p + a2(:,curPatch);
    a3(:,curPatch) = sigmoid(W2 * a2(:,curPatch) + b2);
    J_sparse = J_sparse + 0.5 * (a3(:,curPatch)-data(:,curPatch))' ...
                              * (a3(:,curPatch)-data(:,curPatch));
end
%}

%% Average activation of the hidden units
p = p / numPatches;

%% Backward pass (vectorized)
residual3 = -(data - a3) .* a3 .* (1 - a3);                           % 64x10000
tmp = beta * (-sparsityParam ./ p + (1 - sparsityParam) ./ (1 - p));  % 25x1
residual2 = (W2' * residual3 + repmat(tmp, [1, numPatches])) ...
            .* a2 .* (1 - a2);                                        % 25x10000
W2grad = residual3 * a2'   / numPatches + lambda * W2;
W1grad = residual2 * data' / numPatches + lambda * W1;
b2grad = sum(residual3, 2) / numPatches;
b1grad = sum(residual2, 2) / numPatches;

%{
% Backward pass (loop version)
W1grad = zeros(size(W1)); W2grad = zeros(size(W2));
b1grad = zeros(size(b1)); b2grad = zeros(size(b2));
for curPatch = 1:numPatches
    residual3 = -(data(:,curPatch) - a3(:,curPatch)) ...
                .* (a3(:,curPatch) - a3(:,curPatch).^2);
    residual2 = (W2' * residual3 ...
                 + beta * (-sparsityParam ./ p ...
                           + (1 - sparsityParam) ./ (1 - p))) ...
                .* (a2(:,curPatch) - a2(:,curPatch).^2);
    W2grad = W2grad + residual3 * a2(:,curPatch)';
    b2grad = b2grad + residual3;
    W1grad = W1grad + residual2 * data(:,curPatch)';
    b1grad = b1grad + residual2;
end
W2grad = W2grad / numPatches + lambda * W2;
W1grad = W1grad / numPatches + lambda * W1;
b2grad = b2grad / numPatches;
b1grad = b1grad / numPatches;
%}

%% Update the parameters (the lambda weight decay is already in the gradients)
W2 = W2 - alpha * W2grad;
W1 = W1 - alpha * W1grad;
b2 = b2 - alpha * b2grad;
b1 = b1 - alpha * b1grad;

%% KL divergence (sparsity penalty)
for j = 1:hiddenSize
    KLdist = KLdist + sparsityParam * log(sparsityParam / p(j)) ...
             + (1 - sparsityParam) * log((1 - sparsityParam) / (1 - p(j)));
end

%% Cost, with lambda weight decay and the sparsity penalty
cost = J_sparse / numPatches ...
       + (sum(sum(W1.^2)) + sum(sum(W2.^2))) * lambda / 2 ...
       + beta * KLdist;

% Convert the gradients back to a vector format (suitable for minFunc).
grad = [W1grad(:); W2grad(:); b1grad(:); b2grad(:)];
end

%-------------------------------------------------------------------
% Sigmoid applied elementwise: given a (row or column) vector
% (z1, z2, z3), this returns (f(z1), f(z2), f(z3)).
function sigm = sigmoid(x)
    sigm = 1 ./ (1 + exp(-x));
end
```
The code includes both a vectorized and a non-vectorized implementation. The vectorized version is much shorter and runs considerably faster. The comments in the code explain the details, so I won't repeat them here.
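To make the vectorization point concrete, here is a minimal NumPy sketch (an illustration with made-up sizes, not the article's MATLAB code) comparing a per-sample loop with the batched forward pass; both produce the same hidden activations.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
visible, hidden, m = 64, 25, 200          # sizes chosen for illustration
W1 = 0.01 * rng.standard_normal((hidden, visible))
b1 = np.zeros((hidden, 1))
data = rng.standard_normal((visible, m))  # one training example per column

# Non-vectorized forward pass: one training example at a time
a2_loop = np.zeros((hidden, m))
for i in range(m):
    a2_loop[:, i] = sigmoid(W1 @ data[:, i] + b1[:, 0])

# Vectorized forward pass: the whole batch in one matrix product
# (the NumPy analogue of MATLAB's sigmoid(W1*data + repmat(b1, [1, m])))
a2_vec = sigmoid(W1 @ data + b1)

print(np.allclose(a2_loop, a2_vec))  # True
```

In NumPy the `repmat` is unnecessary because broadcasting stretches `b1` across the batch automatically; the loop version is shown only to make the equivalence explicit.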
Next we implement the second example: learning the features contained in the images below. Here the input images are 28x28 and the hidden layer has 196 units; the code is vectorized, since otherwise training would take a very long time.
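For the sizes quoted here (784 visible units for a 28x28 image, 196 hidden units), the unrolled parameter vector `theta` that the MATLAB code reshapes has a fixed layout; a quick sketch of the index arithmetic (sizes only, no learning):

```python
# Layout of the unrolled parameter vector theta, as used by the reshape
# calls in sparseAutoencoderCost: [W1(:); W2(:); b1; b2].
visible, hidden = 784, 196   # 28x28 input, 196 hidden units

n_w1 = hidden * visible      # W1 is hidden x visible
n_w2 = hidden * visible      # W2 is visible x hidden (same element count)
n_b1 = hidden
n_b2 = visible

total = n_w1 + n_w2 + n_b1 + n_b2
print(total)  # 308308
```

This is why `theta(2*hiddenSize*visibleSize+1 : 2*hiddenSize*visibleSize+hiddenSize)` in the code picks out exactly `b1`: the two weight matrices occupy the first `2*hiddenSize*visibleSize` entries.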
The original images are as follows:
The final learned features look like this:
The code is as follows:
```matlab
function [cost,grad] = sparseAutoencoderCost(theta, visibleSize, hiddenSize, ...
                                             lambda, sparsityParam, beta, data)
% visibleSize:   the number of input units (e.g. 64)
% hiddenSize:    the number of hidden units (e.g. 25)
% lambda:        weight decay parameter
% sparsityParam: the desired average activation rho for the hidden units
% beta:          weight of the sparsity penalty term
% data:          visibleSize x m matrix; data(:,i) is the i-th training example
%
% The input theta is a vector (because minFunc expects the parameters to be a
% vector). We first convert theta to the (W1, W2, b1, b2) matrix/vector format
% so that it follows the notation convention of the lecture notes.

% learning rate (self-defined)
alpha = 0.03;
% accumulator for the hidden-unit activations
p = zeros(hiddenSize, 1);

W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
W2 = reshape(theta(hiddenSize*visibleSize+1:2*hiddenSize*visibleSize), visibleSize, hiddenSize);
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
b2 = theta(2*hiddenSize*visibleSize+hiddenSize+1:end);

numPatches = size(data, 2);
KLdist = 0;

%% forward pass (vectorized over the whole batch)
a2 = sigmoid(W1*data + repmat(b1, [1, numPatches]));
p  = sum(a2, 2);
a3 = sigmoid(W2*a2 + repmat(b2, [1, numPatches]));
J_sparse = 0.5 * sum(sum((a3 - data).^2));

%% average activation of the hidden units
p = p / numPatches;

%% backpropagation
residual3 = -(data - a3) .* a3 .* (1 - a3);
tmp = beta * (-sparsityParam ./ p + (1 - sparsityParam) ./ (1 - p));
residual2 = (W2' * residual3 + repmat(tmp, [1, numPatches])) .* a2 .* (1 - a2);
W2grad = residual3 * a2'   / numPatches + lambda * W2;
W1grad = residual2 * data' / numPatches + lambda * W1;
b2grad = sum(residual3, 2) / numPatches;
b1grad = sum(residual2, 2) / numPatches;

%% update the parameters (weight decay is already included in the gradients)
W2 = W2 - alpha * W2grad;
W1 = W1 - alpha * W1grad;
b2 = b2 - alpha * b2grad;
b1 = b1 - alpha * b1grad;

%% KL divergence (sparsity penalty)
for j = 1:hiddenSize
    KLdist = KLdist + sparsityParam * log(sparsityParam / p(j)) + ...
             (1 - sparsityParam) * log((1 - sparsityParam) / (1 - p(j)));
end

%% cost function, with the weight-decay and sparsity terms
cost = J_sparse / numPatches ...
     + (sum(sum(W1.^2)) + sum(sum(W2.^2))) * lambda / 2 ...
     + beta * KLdist;

% Unroll the gradient matrices back into a vector (suitable for minFunc).
grad = [W1grad(:) ; W2grad(:) ; b1grad(:) ; b2grad(:)];

end

%-------------------------------------------------------------------
% Sigmoid applied element-wise: inputs (z1, z2, z3), returns (f(z1), f(z2), f(z3)).
function sigm = sigmoid(x)
    sigm = 1 ./ (1 + exp(-x));
end
```