Career-Changer Programmer 4: Machine Learning, Regularization (purely to push myself to study)
Source: Internet | Editor: 程序博客网 | Time: 2024/04/29 05:32
Exercise page: http://openclassroom.stanford.edu/MainFolder/DocumentPage.php?course=MachineLearning&doc=exercises/ex5/ex5.html
The code I wrote:
% Exercise 5 -- Regularized Linear Regression
close all; clear; clc;
x = load('ex5Linx.dat');
y = load('ex5Liny.dat');
figure;
plot(x, y, 'o', 'MarkerFaceColor', 'r', 'MarkerSize', 8);
m = length(y);
% Design matrix with polynomial features up to degree 5
x = [ones(m, 1), x, x.^2, x.^3, x.^4, x.^5];
lambda = 1;                       % regularization parameter
matrix = diag([0, ones(1, 5)]);   % do not penalize the intercept term
% Closed-form solution of the regularized normal equations
theta = (x'*x + lambda.*matrix) \ (x'*y);
hold on
plot_x = (min(x(:,2)):0.05:max(x(:,2))+0.2)';
plot_y = [ones(length(plot_x), 1), plot_x, plot_x.^2, plot_x.^3, plot_x.^4, plot_x.^5] * theta;
plot(plot_x, plot_y, '--');
legend('Training data', '5th-order fit')
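For readers without Octave/MATLAB, the closed-form fit above can be sketched in NumPy. The helper names `poly_design` and `ridge_fit` are my own; the math mirrors theta = (X'X + lambda*M)^{-1} X'y with M = diag(0, 1, ..., 1) so the intercept is not penalized:

```python
import numpy as np

def poly_design(x, degree=5):
    """Columns [1, x, x^2, ..., x^degree], like the MATLAB design matrix."""
    return np.vander(x, degree + 1, increasing=True)

def ridge_fit(X, y, lam):
    """Closed-form regularized least squares; column 0 (intercept) is not penalized."""
    n = X.shape[1]
    M = np.diag([0.0] + [1.0] * (n - 1))
    return np.linalg.solve(X.T @ X + lam * M, X.T @ y)

# tiny demo: with lam = 0 this reduces to ordinary least squares,
# so fitting a noiseless quadratic should recover its coefficients
x = np.linspace(-1.0, 1.0, 20)
y = 2.0 + 3.0 * x - 1.5 * x**2
theta = ridge_fit(poly_design(x, 5), y, lam=0.0)
```

Increasing `lam` shrinks the higher-order coefficients toward zero, which is exactly the over-fitting control the exercise demonstrates.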
% Exercise 5 -- Regularized Logistic Regression using Newton's method
close all; clear; clc;
x = load('ex5Logx.dat');
y = load('ex5Logy.dat');
figure;
% Find the indices of the two classes
pos = find(y); neg = find(y == 0);
plot(x(pos, 1), x(pos, 2), '+')
hold on
plot(x(neg, 1), x(neg, 2), 'o', 'MarkerFaceColor', 'y', 'MarkerSize', 8)
legend('y = 1', 'y = 0');
u = x(:,1);
v = x(:,2);
% Map the two inputs to all monomial terms up to degree 6 (28 features)
x = map_feature(u, v);
m = length(y);
g = @(z) 1.0 ./ (1.0 + exp(-z));   % sigmoid (anonymous function; inline is deprecated)
theta = zeros(28, 1);
MAX_ITR = 15;
lambda = 1;                        % regularization parameter
matrix = diag([0, ones(1, 27)]);   % do not penalize theta(1)
J = zeros(MAX_ITR, 1);
for i = 1:MAX_ITR
    h = g(x * theta);
    % Regularized cost at the current theta
    J(i) = (1/m) * sum(-y.*log(h) - (1-y).*log(1-h)) ...
           + (lambda/(2*m)) * (theta(2:end)' * theta(2:end));
    % Regularized gradient and Hessian
    grad = (1/m) .* x' * (h - y) + (lambda/m) .* matrix * theta;
    H = (1/m) .* x' * diag(h) * diag(1-h) * x + (lambda/m) .* matrix;
    theta = theta - H \ grad;      % Newton step
end
% Define the ranges of the grid
u = linspace(-1, 1.5, 200);
v = linspace(-1, 1.5, 200);
% Initialize space for the values to be plotted
z = zeros(length(u), length(v));
% Evaluate z = theta' * x over the grid
for i = 1:length(u)
    for j = 1:length(v)
        % Note the order of j, i here: contour expects z(j,i) at (u(i), v(j))
        z(j,i) = map_feature(u(i), v(j)) * theta;
    end
end
% Draw the decision boundary, i.e. the level set theta' * x = 0
contour(u, v, z, [0, 0], 'LineWidth', 2)
% Plot J
figure
plot(0:MAX_ITR-1, J, 'o--', 'MarkerFaceColor', 'r', 'MarkerSize', 8)
xlabel('Iteration'); ylabel('J')
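The listing relies on `map_feature`, which is provided by the exercise rather than shown here: it maps the two inputs (u, v) to all monomials of total degree up to 6, giving the 28 features that match `theta = zeros(28,1)`. A rough NumPy sketch of it, plus one regularized Newton update, is below; the monomial ordering and the helper name `newton_step` are my assumptions:

```python
import numpy as np

def map_feature(u, v, degree=6):
    """All monomials u^(i-j) * v^j for 0 <= j <= i <= degree.
    Degree 6 yields 28 columns; the column ordering here is an assumption."""
    u = np.atleast_1d(u).astype(float)
    v = np.atleast_1d(v).astype(float)
    cols = [u ** (i - j) * v ** j for i in range(degree + 1) for j in range(i + 1)]
    return np.column_stack(cols)

def newton_step(theta, X, y, lam):
    """One regularized Newton update: theta <- theta - H^{-1} * grad."""
    m = len(y)
    h = 1.0 / (1.0 + np.exp(-X @ theta))           # sigmoid hypothesis
    M = np.diag([0.0] + [1.0] * (len(theta) - 1))  # leave the intercept unpenalized
    grad = X.T @ (h - y) / m + (lam / m) * (M @ theta)
    H = (X.T * (h * (1.0 - h))) @ X / m + (lam / m) * M   # X' diag(h(1-h)) X / m + reg
    return theta - np.linalg.solve(H, grad)

# tiny demo on made-up points (not the exercise data)
u = np.array([0.0, 0.3, -0.4, 0.8])
v = np.array([0.1, -0.5, 0.6, 0.2])
y = np.array([0.0, 1.0, 0.0, 1.0])
X = map_feature(u, v)
theta = newton_step(np.zeros(X.shape[1]), X, y, lam=1.0)
```

The regularization term makes the Hessian positive definite even when there are far more features (28) than examples, which is why the `H \ grad` solve in the MATLAB loop stays well behaved.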