Machine Learning Week 9 Quiz: Programming Assignment - Anomaly Detection and Recommender Systems


1. ex8.m

%% Machine Learning Online Class
%  Exercise 8 | Anomaly Detection and Collaborative Filtering
%
%  Instructions
%  ------------
%
%  This file contains code that helps you get started on the
%  exercise. You will need to complete the following functions:
%
%     estimateGaussian.m
%     selectThreshold.m
%     cofiCostFunc.m
%
%  For this exercise, you will not need to change any code in this file,
%  or any other files other than those mentioned above.
%

%% Initialization
clear ; close all; clc

%% ================== Part 1: Load Example Dataset  ===================
%  We start this exercise by using a small dataset that is easy to
%  visualize.
%
%  Our example case consists of 2 network server statistics across
%  several machines: the latency and throughput of each machine.
%  This exercise will help us find possibly faulty (or very fast) machines.
%

fprintf('Visualizing example dataset for outlier detection.\n\n');

%  The following command loads the dataset. You should now have the
%  variables X, Xval, yval in your environment
load('ex8data1.mat');

%  Visualize the example dataset
plot(X(:, 1), X(:, 2), 'bx');
axis([0 30 0 30]);
xlabel('Latency (ms)');
ylabel('Throughput (mb/s)');

fprintf('Program paused. Press enter to continue.\n');
pause

%% ================== Part 2: Estimate the dataset statistics ===================
%  For this exercise, we assume a Gaussian distribution for the dataset.
%
%  We first estimate the parameters of our assumed Gaussian distribution,
%  then compute the probabilities for each of the points and then visualize
%  both the overall distribution and where each of the points falls in
%  terms of that distribution.
%
fprintf('Visualizing Gaussian fit.\n\n');

%  Estimate mu and sigma2
[mu sigma2] = estimateGaussian(X);

%  Returns the density of the multivariate normal at each data point (row)
%  of X
p = multivariateGaussian(X, mu, sigma2);

%  Visualize the fit
visualizeFit(X, mu, sigma2);
xlabel('Latency (ms)');
ylabel('Throughput (mb/s)');

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ================== Part 3: Find Outliers ===================
%  Now you will find a good epsilon threshold using a cross-validation set
%  probabilities given the estimated Gaussian distribution
%

pval = multivariateGaussian(Xval, mu, sigma2);

[epsilon F1] = selectThreshold(yval, pval);
fprintf('Best epsilon found using cross-validation: %e\n', epsilon);
fprintf('Best F1 on Cross Validation Set:  %f\n', F1);
fprintf('   (you should see a value epsilon of about 8.99e-05)\n\n');

%  Find the outliers in the training set and plot them
outliers = find(p < epsilon);

%  Draw a red circle around those outliers
hold on
plot(X(outliers, 1), X(outliers, 2), 'ro', 'LineWidth', 2, 'MarkerSize', 10);
hold off

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ================== Part 4: Multidimensional Outliers ===================
%  We will now use the code from the previous part and apply it to a
%  harder problem in which more features describe each datapoint and only
%  some features indicate whether a point is an outlier.
%

%  Loads the second dataset. You should now have the
%  variables X, Xval, yval in your environment
load('ex8data2.mat');

%  Apply the same steps to the larger dataset
[mu sigma2] = estimateGaussian(X);

%  Training set
p = multivariateGaussian(X, mu, sigma2);

%  Cross-validation set
pval = multivariateGaussian(Xval, mu, sigma2);

%  Find the best threshold
[epsilon F1] = selectThreshold(yval, pval);

fprintf('Best epsilon found using cross-validation: %e\n', epsilon);
fprintf('Best F1 on Cross Validation Set:  %f\n', F1);
fprintf('# Outliers found: %d\n', sum(p < epsilon));
fprintf('   (you should see a value epsilon of about 1.38e-18)\n\n');
pause
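
Note: multivariateGaussian.m and visualizeFit.m are supplied with the exercise and are not listed here. As a rough sketch of what the density computation amounts to when sigma2 is a vector of per-feature variances (a diagonal covariance), it could look like the snippet below; the name diagGaussianDensity is made up for illustration and is not the provided function.

% Illustrative sketch only (not the provided multivariateGaussian.m):
% density of a Gaussian with diagonal covariance, where mu is 1 x n and
% sigma2 is a 1 x n vector of per-feature variances.
function p = diagGaussianDensity(X, mu, sigma2)
  k = length(mu);
  Xc = bsxfun(@minus, X, mu(:)');                  % center each example
  p = (2*pi)^(-k/2) * prod(sigma2)^(-1/2) * ...
      exp(-0.5 * sum(bsxfun(@rdivide, Xc.^2, sigma2(:)'), 2));
end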

2. ex8_cofi.m

%% Machine Learning Online Class
%  Exercise 8 | Anomaly Detection and Collaborative Filtering
%
%  Instructions
%  ------------
%
%  This file contains code that helps you get started on the
%  exercise. You will need to complete the following functions:
%
%     estimateGaussian.m
%     selectThreshold.m
%     cofiCostFunc.m
%
%  For this exercise, you will not need to change any code in this file,
%  or any other files other than those mentioned above.
%

%% =============== Part 1: Loading movie ratings dataset ================
%  You will start by loading the movie ratings dataset to understand the
%  structure of the data.
%
fprintf('Loading movie ratings dataset.\n\n');

%  Load data
load ('ex8_movies.mat');

%  Y is a 1682x943 matrix, containing ratings (1-5) of 1682 movies by
%  943 users
%
%  R is a 1682x943 matrix, where R(i,j) = 1 if and only if user j gave a
%  rating to movie i

%  From the matrix, we can compute statistics like average rating.
fprintf('Average rating for movie 1 (Toy Story): %f / 5\n\n', ...
        mean(Y(1, R(1, :))));

%  We can "visualize" the ratings matrix by plotting it with imagesc
imagesc(Y);
ylabel('Movies');
xlabel('Users');

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% ============ Part 2: Collaborative Filtering Cost Function ===========
%  You will now implement the cost function for collaborative filtering.
%  To help you debug your cost function, we have included a set of
%  pre-trained weights. Specifically, you should complete the code in
%  cofiCostFunc.m to return J.

%  Load pre-trained weights (X, Theta, num_users, num_movies, num_features)
load ('ex8_movieParams.mat');

%  Reduce the data set size so that this runs faster
num_users = 4; num_movies = 5; num_features = 3;
X = X(1:num_movies, 1:num_features);
Theta = Theta(1:num_users, 1:num_features);
Y = Y(1:num_movies, 1:num_users);
R = R(1:num_movies, 1:num_users);

%  Evaluate cost function
J = cofiCostFunc([X(:) ; Theta(:)], Y, R, num_users, num_movies, ...
               num_features, 0);

fprintf(['Cost at loaded parameters: %f '...
         '\n(this value should be about 22.22)\n'], J);

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% ============== Part 3: Collaborative Filtering Gradient ==============
%  Once your cost function matches up with ours, you should now implement
%  the collaborative filtering gradient function. Specifically, you should
%  complete the code in cofiCostFunc.m to return the grad argument.
%
fprintf('\nChecking Gradients (without regularization) ... \n');

%  Check gradients by running checkCostFunction
checkCostFunction;

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% ========= Part 4: Collaborative Filtering Cost Regularization ========
%  Now, you should implement regularization for the cost function for
%  collaborative filtering. You can implement it by adding the cost of
%  regularization to the original cost computation.
%

%  Evaluate cost function
J = cofiCostFunc([X(:) ; Theta(:)], Y, R, num_users, num_movies, ...
               num_features, 1.5);

fprintf(['Cost at loaded parameters (lambda = 1.5): %f '...
         '\n(this value should be about 31.34)\n'], J);

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% ======= Part 5: Collaborative Filtering Gradient Regularization ======
%  Once your cost matches up with ours, you should proceed to implement
%  regularization for the gradient.
%
fprintf('\nChecking Gradients (with regularization) ... \n');

%  Check gradients by running checkCostFunction
checkCostFunction(1.5);

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% ============== Part 6: Entering ratings for a new user ===============
%  Before we train the collaborative filtering model, we will first
%  add ratings that correspond to a new user that we just observed. This
%  part of the code will also allow you to put in your own ratings for the
%  movies in our dataset!
%
movieList = loadMovieList();

%  Initialize my ratings
my_ratings = zeros(1682, 1);

% Check the file movie_idx.txt for the id of each movie in our dataset
% For example, Toy Story (1995) has ID 1, so to rate it "4", you can set
my_ratings(1) = 4;

% Or suppose you did not enjoy Silence of the Lambs (1991), you can set
my_ratings(98) = 2;

% We have selected a few movies we liked / did not like and the ratings we
% gave are as follows:
my_ratings(7) = 3;
my_ratings(12) = 5;
my_ratings(54) = 4;
my_ratings(64) = 5;
my_ratings(66) = 3;
my_ratings(69) = 5;
my_ratings(183) = 4;
my_ratings(226) = 5;
my_ratings(355) = 5;

fprintf('\n\nNew user ratings:\n');
for i = 1:length(my_ratings)
    if my_ratings(i) > 0
        fprintf('Rated %d for %s\n', my_ratings(i), ...
                movieList{i});
    end
end

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% ================== Part 7: Learning Movie Ratings ====================
%  Now, you will train the collaborative filtering model on a movie rating
%  dataset of 1682 movies and 943 users
%
fprintf('\nTraining collaborative filtering...\n');

%  Load data
load('ex8_movies.mat');

%  Y is a 1682x943 matrix, containing ratings (1-5) of 1682 movies by
%  943 users
%
%  R is a 1682x943 matrix, where R(i,j) = 1 if and only if user j gave a
%  rating to movie i

%  Add our own ratings to the data matrix
Y = [my_ratings Y];
R = [(my_ratings ~= 0) R];

%  Normalize Ratings
[Ynorm, Ymean] = normalizeRatings(Y, R);

%  Useful Values
num_users = size(Y, 2);
num_movies = size(Y, 1);
num_features = 10;

% Set Initial Parameters (Theta, X)
X = randn(num_movies, num_features);
Theta = randn(num_users, num_features);

initial_parameters = [X(:); Theta(:)];

% Set options for fmincg
options = optimset('GradObj', 'on', 'MaxIter', 100);

% Set Regularization
lambda = 10;
theta = fmincg(@(t)(cofiCostFunc(t, Ynorm, R, num_users, num_movies, ...
                                 num_features, lambda)), ...
               initial_parameters, options);

% Unfold the returned theta back into X and Theta
X = reshape(theta(1:num_movies*num_features), num_movies, num_features);
Theta = reshape(theta(num_movies*num_features+1:end), ...
                num_users, num_features);

fprintf('Recommender system learning completed.\n');

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% ================== Part 8: Recommendation for you ====================
%  After training the model, you can now make recommendations by computing
%  the predictions matrix.
%

p = X * Theta';
my_predictions = p(:,1) + Ymean;

movieList = loadMovieList();

[r, ix] = sort(my_predictions, 'descend');
fprintf('\nTop recommendations for you:\n');
for i = 1:10
    j = ix(i);
    fprintf('Predicting rating %.1f for movie %s\n', my_predictions(j), ...
            movieList{j});
end

fprintf('\n\nOriginal ratings provided:\n');
for i = 1:length(my_ratings)
    if my_ratings(i) > 0
        fprintf('Rated %d for %s\n', my_ratings(i), ...
                movieList{i});
    end
end
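
Note: loadMovieList.m, checkCostFunction.m, fmincg.m, and normalizeRatings.m ship with the exercise and are not reproduced here. For intuition, a minimal sketch of the mean normalization that normalizeRatings performs (subtract each movie's average rating, computed over rated entries only) might look like the following; meanNormalizeSketch is a made-up name, not the provided function.

% Illustrative sketch only (not the provided normalizeRatings.m):
% subtract each movie's mean rating from its rated entries; unrated
% entries (R(i, j) == 0) are left at zero.
function [Ynorm, Ymean] = meanNormalizeSketch(Y, R)
  [m, n] = size(Y);
  Ymean = zeros(m, 1);
  Ynorm = zeros(m, n);
  for i = 1:m
    idx = find(R(i, :) == 1);                % users who rated movie i
    Ymean(i) = mean(Y(i, idx));
    Ynorm(i, idx) = Y(i, idx) - Ymean(i);
  end
end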

3. estimateGaussian.m


function [mu sigma2] = estimateGaussian(X)
%ESTIMATEGAUSSIAN This function estimates the parameters of a
%Gaussian distribution using the data in X
%   [mu sigma2] = estimateGaussian(X),
%   The input X is the dataset with each n-dimensional data point in one row
%   The output is an n-dimensional vector mu, the mean of the data set
%   and the variances sigma^2, an n x 1 vector
%

% Useful variables
[m, n] = size(X);

% You should return these values correctly
mu = zeros(n, 1);
sigma2 = zeros(n, 1);

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the mean of the data and the variances
%               In particular, mu(i) should contain the mean of
%               the data for the i-th feature and sigma2(i)
%               should contain variance of the i-th feature.
%

mu = mean(X);
sigma2 = var(X, 1);         % normalize by m, not m - 1

% =============================================================

end
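
The var(X, 1) call normalizes by m (the maximum-likelihood estimate used in the lectures) rather than the default m - 1. A quick illustrative check on a made-up matrix:

% Illustrative check: var(X, 1) matches the 1/m variance formula,
% whereas plain var(X) would divide by (m - 1).
X_demo = [1 2; 3 4; 5 9];                    % hypothetical 3 x 2 dataset
mu_demo = mean(X_demo);
sigma2_demo = sum(bsxfun(@minus, X_demo, mu_demo).^2) / size(X_demo, 1);
assert(max(abs(sigma2_demo - var(X_demo, 1))) < 1e-12);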


4. selectThreshold.m

function [bestEpsilon bestF1] = selectThreshold(yval, pval)
%SELECTTHRESHOLD Find the best threshold (epsilon) to use for selecting
%outliers
%   [bestEpsilon bestF1] = SELECTTHRESHOLD(yval, pval) finds the best
%   threshold to use for selecting outliers based on the results from a
%   validation set (pval) and the ground truth (yval).
%

bestEpsilon = 0;
bestF1 = 0;
F1 = 0;

stepsize = (max(pval) - min(pval)) / 1000;
for epsilon = min(pval):stepsize:max(pval)

    % ====================== YOUR CODE HERE ======================
    % Instructions: Compute the F1 score of choosing epsilon as the
    %               threshold and place the value in F1. The code at the
    %               end of the loop will compare the F1 score for this
    %               choice of epsilon and set it to be the best epsilon if
    %               it is better than the current choice of epsilon.
    %
    % Note: You can use predictions = (pval < epsilon) to get a binary vector
    %       of 0's and 1's of the outlier predictions

    predictions = (pval < epsilon);

    truePositives  = sum((predictions == 1) & (yval == 1));
    falsePositives = sum((predictions == 1) & (yval == 0));
    falseNegatives = sum((predictions == 0) & (yval == 1));

    precision = truePositives / (truePositives + falsePositives);
    recall = truePositives / (truePositives + falseNegatives);

    F1 = (2 * precision * recall) / (precision + recall);

    % =============================================================

    if F1 > bestF1
       bestF1 = F1;
       bestEpsilon = epsilon;
    end
end

end
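
Two details worth noting about the loop above: F1 = 2 * precision * recall / (precision + recall), and for thresholds where nothing is flagged (truePositives + falsePositives == 0) precision becomes 0/0 = NaN, so F1 > bestF1 is false and that epsilon is simply skipped. A tiny check with made-up labels and probabilities (not the exercise data):

% Illustrative check on made-up data: the two smallest probabilities are
% the labeled anomalies, so a perfect F1 of 1 should be reachable.
yval_demo = [1; 0; 0; 1; 0];
pval_demo = [0.01; 0.30; 0.40; 0.02; 0.50];
[eps_demo, F1_demo] = selectThreshold(yval_demo, pval_demo);
fprintf('demo epsilon = %g, F1 = %g\n', eps_demo, F1_demo);   % F1_demo should be 1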

5. cofiCostFunc.m

function [J, grad] = cofiCostFunc(params, Y, R, num_users, num_movies, ...
                                  num_features, lambda)
%COFICOSTFUNC Collaborative filtering cost function
%   [J, grad] = COFICOSTFUNC(params, Y, R, num_users, num_movies, ...
%   num_features, lambda) returns the cost and gradient for the
%   collaborative filtering problem.
%

% Unfold the X and Theta matrices from params
X = reshape(params(1:num_movies*num_features), num_movies, num_features);
Theta = reshape(params(num_movies*num_features+1:end), ...
                num_users, num_features);

% You need to return the following values correctly
J = 0;
X_grad = zeros(size(X));
Theta_grad = zeros(size(Theta));

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost function and gradient for collaborative
%               filtering. Concretely, you should first implement the cost
%               function (without regularization) and make sure it
%               matches our costs. After that, you should implement the
%               gradient and use the checkCostFunction routine to check
%               that the gradient is correct. Finally, you should implement
%               regularization.
%
% Notes: X - num_movies  x num_features matrix of movie features
%        Theta - num_users  x num_features matrix of user features
%        Y - num_movies x num_users matrix of user ratings of movies
%        R - num_movies x num_users matrix, where R(i, j) = 1 if the
%            i-th movie was rated by the j-th user
%
% You should set the following variables correctly:
%
%        X_grad - num_movies x num_features matrix, containing the
%                 partial derivatives w.r.t. to each element of X
%        Theta_grad - num_users x num_features matrix, containing the
%                     partial derivatives w.r.t. to each element of Theta
%

% Only count errors for movie/user pairs that were actually rated (R == 1)
errors = (X*Theta' - Y) .* R;

regularizationTheta = lambda/2 * sum(sum(Theta.^2));
regularizationX = lambda/2 * sum(sum(X.^2));

J = 1/2 * sum(sum(errors .^2)) + regularizationTheta + regularizationX;

X_grad = errors * Theta + lambda * X;
Theta_grad = errors' * X + lambda * Theta;

% =============================================================

grad = [X_grad(:); Theta_grad(:)];

end
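
checkCostFunction.m (called in ex8_cofi.m) already verifies the analytic gradient numerically. As a standalone sanity check in the same spirit, with the sizes and lambda below chosen arbitrarily for illustration, one could compare grad against a centered finite-difference estimate:

% Illustrative finite-difference gradient check on a tiny random problem.
num_movies = 4; num_users = 3; num_features = 2; lambda = 1.5;
X_t = rand(num_movies, num_features);
Theta_t = rand(num_users, num_features);
Y_t = X_t * Theta_t';
R_t = rand(num_movies, num_users) > 0.5;     % random rated/unrated mask
Y_t(~R_t) = 0;

params = [X_t(:); Theta_t(:)];
[J, grad] = cofiCostFunc(params, Y_t, R_t, num_users, num_movies, ...
                         num_features, lambda);

h = 1e-4;
numgrad = zeros(size(params));
for k = 1:numel(params)
    e = zeros(size(params)); e(k) = h;
    numgrad(k) = (cofiCostFunc(params + e, Y_t, R_t, num_users, num_movies, ...
                               num_features, lambda) - ...
                  cofiCostFunc(params - e, Y_t, R_t, num_users, num_movies, ...
                               num_features, lambda)) / (2 * h);
end
fprintf('max |grad - numgrad| = %g\n', max(abs(grad - numgrad)));  % should be tiny (~1e-9)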

