Machine Learning week 5 programming assignment: Neural Network Learning
1. ex4.m
%% Machine Learning Online Class - Exercise 4: Neural Network Learning
%
%  Instructions
%  ------------
%
%  This file contains code that helps you get started on the
%  neural network exercise. You will need to complete the following
%  functions in this exercise:
%
%     sigmoidGradient.m
%     randInitializeWeights.m
%     nnCostFunction.m
%
%  For this exercise, you will not need to change any code in this file,
%  or any other files other than those mentioned above.
%

%% Initialization
clear ; close all; clc

%% Setup the parameters you will use for this exercise
input_layer_size  = 400;  % 20x20 input images of digits (400 input units)
hidden_layer_size = 25;   % 25 hidden units
num_labels = 10;          % 10 labels, from 1 to 10
                          % (note that we have mapped "0" to label 10)

%% =========== Part 1: Loading and Visualizing Data =============
%  We start the exercise by first loading and visualizing the dataset.
%  You will be working with a dataset that contains handwritten digits.
%

% Load Training Data
fprintf('Loading and Visualizing Data ...\n')

load('ex4data1.mat');
m = size(X, 1);

% Randomly select 100 data points to display
sel = randperm(size(X, 1)); % shuffle the example indices
sel = sel(1:100);

displayData(X(sel, :));

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ================ Part 2: Loading Parameters ================
% In this part of the exercise, we load some pre-initialized
% neural network parameters.

fprintf('\nLoading Saved Neural Network Parameters ...\n')

% Load the weights into variables Theta1 and Theta2
load('ex4weights.mat');

% Unroll parameters
nn_params = [Theta1(:) ; Theta2(:)];

%% ================ Part 3: Compute Cost (Feedforward) ================
%  To the neural network, you should first start by implementing the
%  feedforward part of the neural network that returns the cost only. You
%  should complete the code in nnCostFunction.m to return cost. After
%  implementing the feedforward to compute the cost, you can verify that
%  your implementation is correct by verifying that you get the same cost
%  as us for the fixed debugging parameters.
%
%  We suggest implementing the feedforward cost *without* regularization
%  first so that it will be easier for you to debug. Later, in part 4, you
%  will get to implement the regularized cost.
%
fprintf('\nFeedforward Using Neural Network ...\n')

% Weight regularization parameter (we set this to 0 here).
lambda = 0;

J = nnCostFunction(nn_params, input_layer_size, hidden_layer_size, ...
                   num_labels, X, y, lambda);

fprintf(['Cost at parameters (loaded from ex4weights): %f '...
         '\n(this value should be about 0.287629)\n'], J);

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% =============== Part 4: Implement Regularization ===============
%  Once your cost function implementation is correct, you should now
%  continue to implement the regularization with the cost.
%

fprintf('\nChecking Cost Function (w/ Regularization) ... \n')

% Weight regularization parameter (we set this to 1 here).
lambda = 1;

J = nnCostFunction(nn_params, input_layer_size, hidden_layer_size, ...
                   num_labels, X, y, lambda);

fprintf(['Cost at parameters (loaded from ex4weights): %f '...
         '\n(this value should be about 0.383770)\n'], J);

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ================ Part 5: Sigmoid Gradient ================
%  Before you start implementing the neural network, you will first
%  implement the gradient for the sigmoid function. You should complete the
%  code in the sigmoidGradient.m file.
%

fprintf('\nEvaluating sigmoid gradient...\n')

g = sigmoidGradient([1 -0.5 0 0.5 1]);
fprintf('Sigmoid gradient evaluated at [1 -0.5 0 0.5 1]:\n ');
fprintf('%f ', g);
fprintf('\n\n');

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ================ Part 6: Initializing Parameters ================
%  In this part of the exercise, you will be starting to implement a two
%  layer neural network that classifies digits. You will start by
%  implementing a function to initialize the weights of the neural network
%  (randInitializeWeights.m)

fprintf('\nInitializing Neural Network Parameters ...\n')

initial_Theta1 = randInitializeWeights(input_layer_size, hidden_layer_size);
initial_Theta2 = randInitializeWeights(hidden_layer_size, num_labels);

% Unroll parameters
initial_nn_params = [initial_Theta1(:) ; initial_Theta2(:)];

%% =============== Part 7: Implement Backpropagation ===============
%  Once your cost matches up with ours, you should proceed to implement the
%  backpropagation algorithm for the neural network. You should add to the
%  code you've written in nnCostFunction.m to return the partial
%  derivatives of the parameters.
%
fprintf('\nChecking Backpropagation... \n');

% Check gradients by running checkNNGradients
checkNNGradients;

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% =============== Part 8: Implement Regularization ===============
%  Once your backpropagation implementation is correct, you should now
%  continue to implement the regularization with the cost and gradient.
%

fprintf('\nChecking Backpropagation (w/ Regularization) ... \n')

% Check gradients by running checkNNGradients
lambda = 3;
checkNNGradients(lambda);

% Also output the costFunction debugging values
debug_J = nnCostFunction(nn_params, input_layer_size, ...
                         hidden_layer_size, num_labels, X, y, lambda);

fprintf(['\n\nCost at (fixed) debugging parameters (w/ lambda = 3): %f ' ...
         '\n(this value should be about 0.576051)\n\n'], debug_J);

fprintf('Program paused. Press enter to continue.\n');
pause;

%% =================== Part 9: Training NN ===================
%  You have now implemented all the code necessary to train a neural
%  network. To train your neural network, we will now use "fmincg", which
%  is a function which works similarly to "fminunc". Recall that these
%  advanced optimizers are able to train our cost functions efficiently as
%  long as we provide them with the gradient computations.
%
fprintf('\nTraining Neural Network... \n')

%  After you have completed the assignment, change the MaxIter to a larger
%  value to see how more training helps.
options = optimset('MaxIter', 50);

%  You should also try different values of lambda
lambda = 1;

% Create "short hand" for the cost function to be minimized
costFunction = @(p) nnCostFunction(p, ...
                                   input_layer_size, ...
                                   hidden_layer_size, ...
                                   num_labels, X, y, lambda);

% Now, costFunction is a function that takes in only one argument (the
% neural network parameters)
[nn_params, cost] = fmincg(costFunction, initial_nn_params, options);

% Obtain Theta1 and Theta2 back from nn_params
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
                 hidden_layer_size, (input_layer_size + 1));

Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
                 num_labels, (hidden_layer_size + 1));

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ================= Part 10: Visualize Weights =================
%  You can now "visualize" what the neural network is learning by
%  displaying the hidden units to see what features they are capturing in
%  the data.

fprintf('\nVisualizing Neural Network... \n')

displayData(Theta1(:, 2:end));

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% ================= Part 11: Implement Predict =================
%  After training the neural network, we would like to use it to predict
%  the labels. You will now implement the "predict" function to use the
%  neural network to predict the labels of the training set. This lets
%  you compute the training set accuracy.

pred = predict(Theta1, Theta2, X);

fprintf('\nTraining Set Accuracy: %f\n', mean(double(pred == y)) * 100);
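ex4.m also relies on randInitializeWeights.m, one of the three files the exercise asks you to complete, but its listing is not included in this post. Below is a minimal sketch of one common implementation; the fixed epsilon_init = 0.12 is the value suggested in the exercise handout, so treat this as one reasonable choice rather than the only one.

function W = randInitializeWeights(L_in, L_out)
%RANDINITIALIZEWEIGHTS Randomly initialize the weights of a layer with
%L_in incoming connections and L_out outgoing connections. The extra
%first column of W handles the bias term.

% Draw each weight uniformly from [-epsilon_init, epsilon_init] so that
% symmetry is broken and the hidden units learn different features.
epsilon_init = 0.12; % value suggested in the exercise handout
W = rand(L_out, 1 + L_in) * 2 * epsilon_init - epsilon_init;

end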
2. nnCostFunction.m
function [J, grad] = nnCostFunction(nn_params, ...
                                    input_layer_size, ...
                                    hidden_layer_size, ...
                                    num_labels, ...
                                    X, y, lambda)
%NNCOSTFUNCTION Implements the neural network cost function for a two layer
%neural network which performs classification
%   [J grad] = NNCOSTFUNCTON(nn_params, hidden_layer_size, num_labels, ...
%   X, y, lambda) computes the cost and gradient of the neural network. The
%   parameters for the neural network are "unrolled" into the vector
%   nn_params and need to be converted back into the weight matrices.
%
%   The returned parameter grad should be an "unrolled" vector of the
%   partial derivatives of the neural network.
%

% Reshape nn_params back into the parameters Theta1 and Theta2, the weight
% matrices for our 2 layer neural network
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
                 hidden_layer_size, (input_layer_size + 1));

Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
                 num_labels, (hidden_layer_size + 1));

% Setup some useful variables
m = size(X, 1); % number of training examples

% You need to return the following variables correctly
J = 0;                              % scalar
Theta1_grad = zeros(size(Theta1));  % hidden_layer_size x (input_layer_size + 1)
Theta2_grad = zeros(size(Theta2));  % num_labels x (hidden_layer_size + 1)

% ====================== YOUR CODE HERE ======================
% Instructions: You should complete the code by working through the
%               following parts.
%
% Part 1: Feedforward the neural network and return the cost in the
%         variable J. After implementing Part 1, you can verify that your
%         cost function computation is correct by verifying the cost
%         computed in ex4.m
%
% Part 2: Implement the backpropagation algorithm to compute the gradients
%         Theta1_grad and Theta2_grad. You should return the partial
%         derivatives of the cost function with respect to Theta1 and
%         Theta2 in Theta1_grad and Theta2_grad, respectively. After
%         implementing Part 2, you can check that your implementation is
%         correct by running checkNNGradients
%
%         Note: The vector y passed into the function is a vector of labels
%               containing values from 1..K. You need to map this vector
%               into a binary vector of 1's and 0's to be used with the
%               neural network cost function.
%
%         Hint: We recommend implementing backpropagation using a for-loop
%               over the training examples if you are implementing it for
%               the first time.
%
% Part 3: Implement regularization with the cost function and gradients.
%
%         Hint: You can implement this around the code for
%               backpropagation. That is, you can compute the gradients
%               for the regularization separately and then add them to
%               Theta1_grad and Theta2_grad from Part 2.
%

X = [ones(m, 1), X]; % add bias column; X is now m x (n+1)
a1 = X;
z2 = Theta1 * X';
a2 = sigmoid(z2);
a2 = [ones(m, 1), a2'];
z3 = Theta2 * a2';
a3 = sigmoid(z3);
h = a3; % num_labels x m

% Map the label vector y into a binary (one-hot) matrix
y_temp = zeros(num_labels, m);
for i = 1:m
    y_temp(y(i), i) = 1;
end

part1 = y_temp .* log(h);
part2 = (1 - y_temp) .* log(1 - h);
sum1 = sum(sum(-part1 - part2));
J_ori = sum1 / m;

% Regularized cost function (the bias columns are not regularized)
punish_Theta1 = sum(sum(Theta1(:, 2:end).^2));
punish_Theta2 = sum(sum(Theta2(:, 2:end).^2));
J = J_ori + lambda/2/m * (punish_Theta1 + punish_Theta2);

% Backpropagation
for t = 1:m
    a1 = X(t, :);        % 1 x (n+1), bias already included
    z2 = Theta1 * a1';
    a2 = sigmoid(z2);
    a2 = [1; a2];
    z3 = Theta2 * a2;
    a3 = sigmoid(z3);
    z2 = [1; z2];
    delta3 = a3 - y_temp(:, t);
    delta2 = (Theta2' * delta3) .* sigmoidGradient(z2);
    delta2 = delta2(2:end); % drop the bias unit's error term
    Theta2_grad = Theta2_grad + delta3 * a2';
    Theta1_grad = Theta1_grad + delta2 * a1;
end
Theta2_grad = Theta2_grad / m;
Theta1_grad = Theta1_grad / m;

% Regularized gradient (skip the bias column)
reg_theta1 = Theta1(:, 2:end) * lambda/m;
reg_theta2 = Theta2(:, 2:end) * lambda/m;
Theta1_grad(:, 2:end) = Theta1_grad(:, 2:end) + reg_theta1;
Theta2_grad(:, 2:end) = Theta2_grad(:, 2:end) + reg_theta2;

% -------------------------------------------------------------
% =========================================================================

% Unroll gradients
grad = [Theta1_grad(:) ; Theta2_grad(:)];

end
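The per-example for-loop above is the implementation the exercise recommends for a first attempt, but the same gradients can be computed for all m examples at once. The sketch below is my own vectorized variant, not part of the assignment starter code; it assumes X already carries the bias column and reuses the variable names from the function above.

% Build the one-hot matrix without a loop (same result as the for-loop)
I = eye(num_labels);
y_temp = I(:, y);                       % num_labels x m

% Vectorized forward pass over all m examples
A1 = X;                                 % m x (n+1), bias column included
Z2 = A1 * Theta1';                      % m x hidden_layer_size
A2 = [ones(m, 1), sigmoid(Z2)];         % m x (hidden_layer_size + 1)
Z3 = A2 * Theta2';                      % m x num_labels
A3 = sigmoid(Z3);

% Vectorized backpropagation
D3 = A3 - y_temp';                                    % m x num_labels
D2 = (D3 * Theta2(:, 2:end)) .* sigmoidGradient(Z2);  % m x hidden_layer_size
Theta2_grad = (D3' * A2) / m;
Theta1_grad = (D2' * A1) / m;

The regularization terms are then added to Theta1_grad and Theta2_grad exactly as in the loop version.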
3. sigmoidGradient.m
function g = sigmoidGradient(z)
%SIGMOIDGRADIENT returns the gradient of the sigmoid function
%evaluated at z
%   g = SIGMOIDGRADIENT(z) computes the gradient of the sigmoid function
%   evaluated at z. This should work regardless of whether z is a matrix
%   or a vector. In particular, if z is a vector or matrix, you should
%   return the gradient for each element.

g = zeros(size(z)); % same size as z

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the gradient of the sigmoid function evaluated at
%               each value of z (z can be a matrix, vector or scalar).

h = sigmoid(z);
g = h .* (1 - h);

% =============================================================

end
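A quick way to sanity-check sigmoidGradient is to compare it against a central-difference approximation, in the same spirit as checkNNGradients. This is my own test snippet, not part of the assignment:

z = [1 -0.5 0 0.5 1];
e = 1e-4;
% Central difference: (f(z+e) - f(z-e)) / (2e) approximates f'(z)
numgrad = (sigmoid(z + e) - sigmoid(z - e)) / (2 * e);
fprintf('Max difference: %g\n', max(abs(numgrad - sigmoidGradient(z))));
% The difference should be on the order of 1e-9 or smaller; the analytic
% values are about [0.196612 0.235004 0.250000 0.235004 0.196612].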
4. Submit results