Article title: UFLDL: Exercise One
Source: Internet · Editor: 程序博客网 · Time: 2024/05/16 05:30
PCA and Whitening && Softmax Regression
(1) PCA
%%================================================================
%% Step 0: Load and visualise the raw data
x = sampleIMAGESRAW();
figure('name','Raw images');
randsel = randi(size(x,2), 200, 1);   % A random selection of samples for visualization
display_network(x(:,randsel));

%%================================================================
%% Step 1a: Zero-mean each feature (row) of the data
avg = mean(x, 2);
x = x - repmat(avg, 1, size(x,2));

%%================================================================
%% Step 1b: Implement PCA to obtain the rotated data xRot
xRot = zeros(size(x));   % You need to compute this
[u, s, v] = svd(x);
xRot = u' * x;

%%================================================================
%% Step 1c: Check your implementation of PCA
covar = zeros(size(x, 1));   % You need to compute this
covar = xRot * xRot' / size(xRot, 2);   % covariance of the rotated data; should be (nearly) diagonal

figure('name','Visualisation of covariance matrix');
imagesc(covar);

%%================================================================
%% Step 2: Find k, the number of components to retain 99% of the variance
k = 0;   % Set k accordingly
egis = eig(covar);
egis = sort(egis, 'descend');
for i = 1:size(covar, 1)
    if (sum(egis(1:i)) / sum(egis) > 0.99)
        k = i;
        break;
    end
end

%%================================================================
%% Step 3: Implement PCA with dimension reduction
xHat = zeros(size(x));   % You need to compute this
xHat = u * [xRot(1:k,:); zeros(size(xHat(k+1:end,:)))];

% Visualise the data, and compare it to the raw data.
% You should observe that the raw and processed data are of comparable quality.
% For comparison, you may wish to generate a PCA reduced image which
% retains only 90% of the variance.
figure('name', ['PCA processed images ', sprintf('(%d / %d dimensions)', k, size(x, 1)), '']);
display_network(xHat(:,randsel));
figure('name','Raw images');
display_network(x(:,randsel));

%%================================================================
%% Step 4a: Implement PCA with whitening and regularisation
% Implement PCA with whitening and regularisation to produce the matrix
% xPCAWhite.
epsilon = 0.1;
xPCAWhite = zeros(size(x));
avg = mean(x, 1);   % Compute the mean pixel intensity value separately for each patch.
x = x - repmat(avg, size(x, 1), 1);
sigma = x * x' / size(x, 2);
[U, S, V] = svd(sigma);
xRot = U' * x;            % rotated version of the data
xTilde = U(:,1:k)' * x;   % reduced-dimension representation of the data,
                          % where k is the number of eigenvectors to keep
xPCAWhite = diag(1 ./ sqrt(diag(S) + epsilon)) * U' * x;

%%================================================================
%% Step 4b: Check your implementation of PCA whitening
% Visualise the covariance matrix. You should see a red line across the
% diagonal against a blue background.
covar = xPCAWhite * xPCAWhite' / size(xPCAWhite, 2);
figure('name','Visualisation of covariance matrix');
imagesc(covar);

%%================================================================
%% Step 5: Implement ZCA whitening
xZCAWhite = zeros(size(x));
xZCAWhite = U * diag(1 ./ sqrt(diag(S) + epsilon)) * U' * x;

%%================================================================
%% Step 6: Compare whitened images to the raw images
figure('name','ZCA whitened images');
display_network(xZCAWhite(:,randsel));
figure('name','Raw images');
display_network(x(:,randsel));
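The whole PCA-whitening pipeline above can be sketched compactly in NumPy. This is an illustrative helper (the function name `pca_whiten` and its arguments are mine, not part of the exercise starter code); it assumes one example per column, as in the MATLAB script:

```python
import numpy as np

def pca_whiten(x, retain=0.99, epsilon=0.1):
    """PCA rotation, variance-based dimension selection, PCA and ZCA whitening.

    x: (n_features, n_examples) array, one example per column.
    Returns (x_pca_white, x_zca_white, k).
    """
    # Zero-mean each feature across examples
    x = x - x.mean(axis=1, keepdims=True)
    # Sample covariance and its eigen-decomposition via SVD
    sigma = x @ x.T / x.shape[1]
    U, S, _ = np.linalg.svd(sigma)          # S holds eigenvalues, descending
    # Smallest k whose leading eigenvalues retain the requested variance
    ratios = np.cumsum(S) / np.sum(S)
    k = int(np.searchsorted(ratios, retain) + 1)
    # Whitening: rotate, then rescale each component by 1/sqrt(lambda + eps)
    scale = 1.0 / np.sqrt(S + epsilon)
    x_pca_white = np.diag(scale) @ U.T @ x
    # ZCA rotates the whitened data back into the original coordinate system
    x_zca_white = U @ x_pca_white
    return x_pca_white, x_zca_white, k
```

With `epsilon` near zero, the covariance of `x_pca_white` is the identity by construction, which is exactly the "red diagonal on a blue background" check in the exercise; the nonzero `epsilon = 0.1` used above trades a little of that exactness for regularisation.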
(2) Softmax Regression
function [cost, grad] = softmaxCost(theta, numClasses, inputSize, lambda, data, labels)
% numClasses - the number of classes
% inputSize  - the size N of the input vector
% lambda     - weight decay parameter
% data       - the N x M input matrix, where each column data(:, i) corresponds
%              to a single example
% labels     - an M x 1 matrix containing the labels for the input data

% Unroll the parameters from theta
theta = reshape(theta, numClasses, inputSize);
numCases = size(data, 2);

groundTruth = full(sparse(labels, 1:numCases, 1));   % numClasses x M indicator matrix
cost = 0;
thetagrad = zeros(numClasses, inputSize);

M = theta * data;                       % (numClasses x N) * (N x M)
M = bsxfun(@minus, M, max(M, [], 1));   % subtract the column-wise max for numerical stability
h = exp(M);
h = bsxfun(@rdivide, h, sum(h));        % column-wise softmax probabilities

cost = -1/numCases * sum(sum(groundTruth .* log(h))) + lambda/2 * sum(sum(theta.^2));
thetagrad = -1/numCases * ((groundTruth - h) * data') + lambda * theta;

% The key section above, without vectorization, would read:
% for i = 1:numCases
%     s = groundTruth(:,i) .* log(h(:,i));
%     cost = cost + sum(s);
% end
% cost = cost * (-1) / numCases + lambda/2 * sum(sum(theta.^2));
%
% for i = 1:numClasses
%     for j = 1:numCases
%         k = ((groundTruth(:,j) - h(:,j)) * data(:,j)');
%         thetagrad(i,:) = thetagrad(i,:) + k(i,:);
%     end
%     thetagrad(i,:) = -thetagrad(i,:) / numCases + lambda * theta(i,:);
% end

grad = thetagrad(:);
end
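The same vectorized cost and gradient can be sketched in NumPy (a hypothetical helper mirroring `softmaxCost`, using 0-based class labels). The max-subtraction step is the same overflow guard as the `bsxfun(@minus, M, max(M, [], 1))` line above:

```python
import numpy as np

def softmax_cost(theta, num_classes, input_size, lam, data, labels):
    """Softmax regression cost and gradient with weight decay.

    theta:  flat parameter vector of length num_classes * input_size
    data:   (input_size, m) matrix, one example per column
    labels: length-m integer array of class indices in [0, num_classes)
    """
    theta = theta.reshape(num_classes, input_size)
    m = data.shape[1]
    # One-hot indicator matrix, num_classes x m (the groundTruth matrix)
    ground_truth = np.zeros((num_classes, m))
    ground_truth[labels, np.arange(m)] = 1.0
    scores = theta @ data
    scores -= scores.max(axis=0, keepdims=True)   # numerical stability
    h = np.exp(scores)
    h /= h.sum(axis=0, keepdims=True)             # column-wise softmax
    cost = -np.sum(ground_truth * np.log(h)) / m + lam / 2 * np.sum(theta ** 2)
    grad = -(ground_truth - h) @ data.T / m + lam * theta
    return cost, grad.ravel()
```

A quick way to gain confidence in the gradient, as the UFLDL exercises recommend, is to compare it against a centered finite-difference approximation of the cost.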