Coursera Machine Learning (Week 2: Linear Regression)
Selected solution code from the programming exercises.

Week 2: Linear Regression
computeCost
predictions = X * theta;
J = 1/(2*m) * (predictions - y)' * (predictions - y);
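A quick sanity check on a hypothetical three-point data set (the cost of a perfect fit should be exactly zero):

X = [ones(3, 1), [1; 2; 3]];   % design matrix with intercept column
y = [2; 4; 6];                 % lies exactly on h(x) = 2x
theta = [0; 2];
m = length(y);
predictions = X * theta;
J = 1/(2*m) * (predictions - y)' * (predictions - y)   % prints J = 0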
gradientDescent
temp1 = theta(1) - (alpha / m) * sum((X * theta - y) .* X(:,1));
temp2 = theta(2) - (alpha / m) * sum((X * theta - y) .* X(:,2));
theta(1) = temp1;   % simultaneous update: assign only after both temps are computed
theta(2) = temp2;
J_history(iter) = computeCost(X, y, theta);
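For reference, the same simultaneous update collapses into one vectorized line (a sketch of the standard form, mathematically identical to the per-parameter version above):

theta = theta - (alpha / m) * X' * (X * theta - y);
J_history(iter) = computeCost(X, y, theta);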
featureNormalize
mu = mean(X);
sigma = std(X);
for i = 1:size(mu, 2)
    X_norm(:,i) = (X(:,i) - mu(i)) ./ sigma(i);
end

computeCostMulti

predictions = X * theta;
J = 1/(2*m) * (predictions - y)' * (predictions - y);
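computeCostMulti is the single-variable cost unchanged: the vectorized form already handles any number of features. For featureNormalize, a quick check on hypothetical two-feature data (each normalized column should come out with mean ~0 and standard deviation ~1):

X = [2104 3; 1600 2; 2400 4];
mu = mean(X);
sigma = std(X);
X_norm = (X - mu) ./ sigma;    % broadcasting form of the loop above (Octave / MATLAB R2016b+)
mean(X_norm)                   % ~ [0 0]
std(X_norm)                    % ~ [1 1]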
gradientDescentMulti
for i = 1:size(X, 2)
    temp(i) = theta(i) - (alpha / m) * sum((X * theta - y) .* X(:,i));
end
for j = 1:size(X, 2)
    theta(j) = temp(j);
end
Normal Equation
theta = pinv(X'*X)*X'*y;
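Because pinv copes even when X'X is singular, this closed form needs no learning rate or iteration. A hypothetical check against an exact line:

X = [ones(4, 1), (1:4)'];
y = 1 + 2 * (1:4)';            % exact line: y = 1 + 2x
theta = pinv(X' * X) * X' * y  % prints theta = [1; 2]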
Programming Exercise 2: Logistic Regression
1.1 Visualizing the data
% Find indices of positive and negative examples
pos = find(y == 1); neg = find(y == 0);   % index vectors for the 1/0 labels
% Plot examples
plot(X(pos, 1), X(pos, 2), 'k+', 'LineWidth', 2, 'MarkerSize', 7);
plot(X(neg, 1), X(neg, 2), 'ko', 'MarkerFaceColor', 'y', 'MarkerSize', 7);
1.2.1 Sigmoid function
g = 1 ./ (1 + exp(-z));   % elementwise, so z may be a scalar, vector, or matrix
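Spot checks (sigmoid(0) must be 0.5, saturating toward 0 and 1 at the extremes):

z = [-10 0 10];
g = 1 ./ (1 + exp(-z))   % ~ [0.0000 0.5000 1.0000]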
1.2.2 Cost function and gradient
H = sigmoid(X * theta);
J = 1/m * sum(-y .* log(H) - (1 - y) .* log(1 - H));
grad = 1/m * ((H - y)' * X)';
Evaluating logistic regression
p_predict = sigmoid(X * theta);
p_point = find(p_predict >= 0.5);
p(p_point, 1) = 1;
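The find/index step can also be replaced by a one-line logical comparison (same output, assuming p was initialized to zeros):

p = double(sigmoid(X * theta) >= 0.5);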
2 Regularized logistic regression
H = sigmoid(X * theta);
theta2 = theta(2:length(theta), 1);   % exclude theta(1): the bias term is never regularized
J = 1/m * sum(-y .* log(H) - (1 - y) .* log(1 - H)) + lambda/(2*m) * sum(theta2.^2);
grad = 1/m * ((H - y)' * X)';
grad_reg = grad + lambda/m * theta;
grad = [grad(1, 1); grad_reg(2:length(theta), 1)];   % keep the unregularized gradient for theta(1)
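An equivalent way to express the same gradient is to zero out the bias entry before adding the penalty:

theta_reg = [0; theta(2:end)];
grad = 1/m * X' * (H - y) + (lambda/m) * theta_reg;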
2.5 Optional (ungraded) exercises
Notice the changes in the decision boundary as you vary lambda. With a small lambda, you should find that the classifier gets almost every training example correct, but draws a very complicated boundary, thus overfitting the data.
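A sketch of how to eyeball this, assuming ex2's costFunctionReg and plotDecisionBoundary helper functions are on the path and X is the mapFeature-expanded matrix:

for lambda = [0 1 100]
    initial_theta = zeros(size(X, 2), 1);
    options = optimset('GradObj', 'on', 'MaxIter', 400);
    theta = fminunc(@(t)(costFunctionReg(t, X, y, lambda)), initial_theta, options);
    plotDecisionBoundary(theta, X, y);
    title(sprintf('lambda = %g', lambda));
end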
Programming Exercise 3: Multi-class Classification and Neural Networks
1.3.3 Vectorizing regularized logistic regression
H = sigmoid(X * theta);
J = sum(-y .* log(H) - (1 - y) .* log(1 - H))/m + lambda/(2*m) * sum(theta(2:end).^2);
grad = X' * (H - y)/m;
grad_reg = grad + lambda/m * theta;
grad = [grad(1); grad_reg(2:length(theta))];

1.4 One-vs-all Classification

for k = 1:num_labels
    initial_theta = zeros(n + 1, 1);
    options = optimset('GradObj', 'on', 'MaxIter', 50);
    [theta] = fmincg(@(t)(lrCostFunction(t, X, (y == k), lambda)), initial_theta, options);
    all_theta(k,:) = theta';
end
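The (y == k) trick is what turns one multi-class problem into num_labels binary ones. For a hypothetical label vector:

y = [1; 3; 2; 3];
(y == 3)'   % ans = 0 1 0 1: class 3 becomes the positive class, all others negative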
1.4.1 One-vs-all Prediction
[c, i] = max(sigmoid(X * all_theta'), [], 2);   % c: best class score per row; i: its column index
p = i;                                          % the column index is the predicted label 1..num_labels
2 Neural Networks
X = [ones(m, 1) X];                  % add the bias column
tem1 = Theta1 * X';
ans1 = sigmoid(tem1);                % hidden-layer activations
tem2 = Theta2 * [ones(1, m); ans1];  % add the hidden layer's bias unit
ans2 = sigmoid(tem2);                % output-layer activations
[c, i] = max(ans2', [], 2);
p = i;
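The same forward pass reads more naturally row-wise, transposing the weights once per layer instead of transposing the data (a sketch starting from X before the bias column is added; in ex3 the shapes are X: m x 400, Theta1: 25 x 401, Theta2: 10 x 26):

a1 = [ones(m, 1) X];            % m x 401
a2 = sigmoid(a1 * Theta1');     % m x 25
a2 = [ones(m, 1) a2];           % m x 26
h  = sigmoid(a2 * Theta2');     % m x 10, one row of class scores per example
[~, p] = max(h, [], 2);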
Programming Exercise 4: Neural Networks Learning
ans1 = [ones(m, 1) X];
tem2 = ans1 * Theta1';
ans2 = sigmoid(tem2);
tem3 = [ones(m, 1) ans2] * Theta2';
H = sigmoid(tem3);
% Recode the labels 1..num_labels as one-hot rows
yy = zeros(m, num_labels);
for i = 1:m
    yy(i, y(i)) = 1;
end
J = 1/m * sum(sum(-yy .* log(H) - (1 - yy) .* log(1 - H))) ...
    + lambda/(2*m) * (sum(sum(Theta1(:,2:end).^2)) + sum(sum(Theta2(:,2:end).^2)));
% Backpropagation, one training example at a time
for row = 1:m
    ans1 = [1 X(row,:)]';
    tem2 = Theta1 * ans1;
    ans2 = [1; sigmoid(tem2)];
    tem3 = Theta2 * ans2;
    ans3 = sigmoid(tem3);
    delta3 = ans3 - yy(row, :)';
    delta2 = (Theta2' * delta3) .* sigmoidGradient([1; tem2]);
    delta2 = delta2(2:end);   % drop the bias unit's delta
    Theta1_grad = Theta1_grad + delta2 * ans1';
    Theta2_grad = Theta2_grad + delta3 * ans2';
end
Theta1_grad = Theta1_grad ./ m;
Theta1_grad(:, 2:end) = Theta1_grad(:, 2:end) + (lambda/m) * Theta1(:, 2:end);
Theta2_grad = Theta2_grad ./ m;
Theta2_grad(:, 2:end) = Theta2_grad(:, 2:end) + (lambda/m) * Theta2(:, 2:end);   % regularize Theta2's non-bias columns as well
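Backprop is worth validating with a numerical gradient check; the course's computeNumericalGradient does essentially this (sketch, assuming theta is the unrolled parameter vector and costFun a handle returning the cost at it):

epsilon = 1e-4;
numgrad = zeros(size(theta));
for i = 1:numel(theta)
    perturb = zeros(size(theta));
    perturb(i) = epsilon;
    numgrad(i) = (costFun(theta + perturb) - costFun(theta - perturb)) / (2 * epsilon);
end
% The relative difference norm(numgrad - grad) / norm(numgrad + grad)
% should come out around 1e-9 when the analytic gradient is correct.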