Machine Learning Stanford (week 2)
1. Multivariate Linear Regression
1.1 Multiple Features
Note: [7:25 - θT is a 1 by (n+1) matrix and not an (n+1) by 1 matrix]
Linear regression with multiple variables is also known as “multivariate linear regression”.
We now introduce notation for equations where we can have any number of input variables.
The multivariable form of the hypothesis function accommodating these multiple features is as follows:

hθ(x) = θ0 + θ1x1 + θ2x2 + ⋯ + θnxn
In order to develop intuition about this function, we can think about θ0 as the basic price of a house, θ1 as the price per square meter, θ2 as the price per floor, etc. x1 will be the number of square meters in the house, x2 the number of floors, etc.
Using the definition of matrix multiplication, our multivariable hypothesis function can be concisely represented as:

hθ(x) = [θ0 θ1 ⋯ θn] [x0, x1, …, xn]ᵀ = θᵀx
This is a vectorization of our hypothesis function for one training example; see the lessons on vectorization to learn more.
Remark: Note that for convenience reasons in this course we assume x0^(i) = 1 for i ∈ {1, …, m}. This allows us to do matrix operations with θ and x, making the two vectors θ and x^(i) match each other element-wise (that is, have the same number of elements: n + 1).
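To make the vectorized form concrete, here is a minimal sketch (in NumPy rather than the course's Octave; the variable names X and theta are illustrative) that computes hθ(x) = θᵀx for every training example at once:

```python
import numpy as np

# Design matrix: m training examples, each row is [x0=1, x1, ..., xn]
X = np.array([[1.0, 2104.0, 3.0],
              [1.0, 1600.0, 3.0],
              [1.0, 2400.0, 4.0]])
theta = np.array([50.0, 0.1, 20.0])   # parameters theta_0 .. theta_n

# Vectorized hypothesis: one matrix-vector product replaces a loop over
# examples; predictions[i] equals theta^T x^(i)
predictions = X @ theta
print(predictions)
```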
1.2 Gradient Descent For Multiple Variables
The gradient descent equation itself is generally the same form; we just have to repeat it for our 'n' features:

repeat until convergence: {
  θj := θj − α (1/m) Σ_{i=1..m} (hθ(x^(i)) − y^(i)) · xj^(i)   (simultaneously for j = 0, 1, …, n)
}

In other words, written out for the first few parameters:

repeat until convergence: {
  θ0 := θ0 − α (1/m) Σ_{i=1..m} (hθ(x^(i)) − y^(i)) · x0^(i)
  θ1 := θ1 − α (1/m) Σ_{i=1..m} (hθ(x^(i)) − y^(i)) · x1^(i)
  θ2 := θ2 − α (1/m) Σ_{i=1..m} (hθ(x^(i)) − y^(i)) · x2^(i)
  ⋯
}
The course slides compare gradient descent with one variable to gradient descent with multiple variables; the update rule has the same form in both cases.
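A minimal vectorized sketch of this update rule (NumPy, illustrative names; X is assumed to already include the x0 = 1 column):

```python
import numpy as np

def gradient_descent(X, y, alpha=0.01, num_iters=1000):
    """Batch gradient descent for multivariate linear regression.

    X: (m, n+1) design matrix with a leading column of ones.
    y: (m,) target vector.
    Returns the learned parameter vector theta of shape (n+1,).
    """
    m = X.shape[0]
    theta = np.zeros(X.shape[1])
    for _ in range(num_iters):
        error = X @ theta - y                 # h_theta(x^(i)) - y^(i) for every i
        theta -= alpha * (X.T @ error) / m    # simultaneous update of all theta_j
    return theta
```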
1.3 Gradient Descent in Practice I - Feature Scaling
Note: [6:20 - The average size of a house is 1000 but 100 is accidentally written instead]
We can speed up gradient descent by having each of our input values in roughly the same range. This is because θ will descend quickly on small ranges and slowly on large ranges, and so will oscillate inefficiently down to the optimum when the variables are very uneven.
The way to prevent this is to modify the ranges of our input variables so that they are all roughly the same. Ideally:
−1 ≤ x(i) ≤ 1

or

−0.5 ≤ x(i) ≤ 0.5
These aren’t exact requirements; we are only trying to speed things up. The goal is to get all input variables into roughly one of these ranges, give or take a few.
Two techniques to help with this are feature scaling and mean normalization. Feature scaling involves dividing the input values by the range (i.e. the maximum value minus the minimum value) of the input variable, resulting in a new range of just 1. Mean normalization involves subtracting the average value for an input variable from the values for that input variable, resulting in a new average value for the input variable of just zero. To implement both of these techniques, adjust your input values as shown in this formula:

xi := (xi − μi) / si
Where μi is the average of all the values for feature (i) and si is the range of values (max − min), or si is the standard deviation.
Note that dividing by the range and dividing by the standard deviation give different results. The quizzes in this course use the range; the programming exercises use the standard deviation, i.e. si is the standard deviation of feature i rather than (max − min).
For example, if xi represents housing prices with a range of 100 to 2000 and a mean value of 1000, then xi := (price − 1000) / 1900.
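As an illustrative sketch (NumPy, column-wise operations; the function name is mine), mean normalization with either the range or the standard deviation could be implemented as:

```python
import numpy as np

def mean_normalize(X, use_std=False):
    """Mean-normalize each feature column of X.

    use_std=False divides by the range (max - min), as in the quizzes;
    use_std=True divides by the standard deviation, as in the exercises.
    Returns the normalized matrix plus mu and s for rescaling new inputs.
    """
    mu = X.mean(axis=0)
    s = X.std(axis=0) if use_std else X.max(axis=0) - X.min(axis=0)
    return (X - mu) / s, mu, s

# Example with a single feature column of house sizes
X_norm, mu, s = mean_normalize(np.array([[2104.0], [1600.0], [2400.0], [1416.0]]))
print(X_norm.ravel(), mu, s)
```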
1.4 Gradient Descent in Practice II - Learning Rate
Note: [5:20 - the x -axis label in the right graph should be θ rather than No. of iterations ]
Debugging gradient descent. Make a plot with number of iterations on the x-axis. Now plot the cost function, J(θ) over the number of iterations of gradient descent. If J(θ) ever increases, then you probably need to decrease α.
Automatic convergence test. Declare convergence if J(θ) decreases by less than E in one iteration, where E is some small value such as 10⁻³. However, in practice it is difficult to choose this threshold value.
It has been proven that if learning rate α is sufficiently small, then J(θ) will decrease on every iteration.
Try this question!
To summarize:
If α is too small: slow convergence.
If α is too large: may not decrease on every iteration and thus may not converge.
In practice, try values of α spaced roughly threefold apart, e.g. …, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, …, and use the largest value for which J(θ) still decreases steadily on every iteration.
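A sketch of that debugging loop (NumPy; cost and run are hypothetical helper names) which records J(θ) for each candidate α:

```python
import numpy as np

def cost(X, y, theta):
    """Squared-error cost J(theta) = 1/(2m) * sum((X theta - y)^2)."""
    m = len(y)
    return ((X @ theta - y) ** 2).sum() / (2 * m)

def run(X, y, alpha, num_iters=50):
    """Run gradient descent and record J(theta) after every iteration."""
    theta = np.zeros(X.shape[1])
    history = []
    for _ in range(num_iters):
        theta -= alpha * (X.T @ (X @ theta - y)) / len(y)
        history.append(cost(X, y, theta))
    return history

# Toy data with a column of ones; sweep alphas spaced ~3x apart and keep
# the largest one whose cost still decreases (too-large alphas overshoot
# and the recorded cost grows instead of shrinking).
X = np.c_[np.ones(4), np.array([1.0, 2.0, 3.0, 4.0])]
y = np.array([2.0, 4.0, 6.0, 8.0])
for alpha in [0.001, 0.003, 0.01, 0.03, 0.1, 0.3]:
    print(alpha, run(X, y, alpha)[-1])
```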
1.5 Features and Polynomial Regression
We can improve our features and the form of our hypothesis function in a couple different ways.
We can combine multiple features into one. For example, we can combine x1 and x2 into a new feature x3 by taking x1 · x2.
Polynomial Regression
Our hypothesis function need not be linear (a straight line) if that does not fit the data well.
We can change the behavior or curve of our hypothesis function by making it a quadratic, cubic or square root function (or any other form).
For example, if our hypothesis function is hθ(x) = θ0 + θ1x1, then we can create additional features based on x1 to get the quadratic function hθ(x) = θ0 + θ1x1 + θ2x1² or the cubic function hθ(x) = θ0 + θ1x1 + θ2x1² + θ3x1³.
In the cubic version, we have created new features x2 and x3, where x2 = x1² and x3 = x1³.
To make it a square root function, we could do: hθ(x) = θ0 + θ1x1 + θ2√x1.
One important thing to keep in mind is that if you choose your features this way, then feature scaling becomes very important.
e.g. if x1 has range 1–1000, then the range of x1² becomes 1–1,000,000, and that of x1³ becomes 1–10⁹.
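An illustrative sketch of building such polynomial features and then scaling them (NumPy; the function name is mine):

```python
import numpy as np

def polynomial_features(x1, degree=3):
    """Stack x1, x1^2, ..., x1^degree as columns of a feature matrix."""
    return np.column_stack([x1 ** d for d in range(1, degree + 1)])

x1 = np.linspace(1.0, 1000.0, 5)
X_poly = polynomial_features(x1)

# Feature scaling matters here: the columns span wildly different ranges
# (1..1e3, 1..1e6, 1..1e9), so normalize each column before gradient descent.
X_scaled = (X_poly - X_poly.mean(axis=0)) / (X_poly.max(axis=0) - X_poly.min(axis=0))
```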
2. Computing Parameters Analytically
2.1 Normal Equation
Note: [8:00 to 8:44 - The design matrix X (in the bottom right side of the slide) given in the example should have elements x with subscript 1 and superscripts varying from 1 to m because for all m training sets there are only 2 features x0 and x1. 12:56 - The X matrix is m by (n+1) and NOT n by n. ]
Gradient descent gives one way of minimizing J. Let's discuss a second way of doing so, this time performing the minimization explicitly and without resorting to an iterative algorithm. In the "Normal Equation" method, we will minimize J by explicitly taking its derivatives with respect to the θj's and setting them to zero. This allows us to find the optimum θ without iteration. The normal equation formula is given below:

θ = (XᵀX)⁻¹ Xᵀ y
There is no need to do feature scaling with the normal equation.
The following is a comparison of gradient descent and the normal equation:

| Gradient Descent | Normal Equation |
| --- | --- |
| Need to choose α | No need to choose α |
| Needs many iterations | No need to iterate |
| O(kn²) | O(n³), need to calculate inverse of XᵀX |
| Works well when n is large | Slow if n is very large |
With the normal equation, computing the inversion has complexity O(n³). So if we have a very large number of features, the normal equation will be slow. In practice, when n exceeds 10,000 it might be a good time to go from a normal solution to an iterative process.
Regarding the formula: X is the m × (n+1) design matrix whose i-th row is the training example (x^(i))ᵀ (with x0 = 1), and y is the m-dimensional vector of targets; θ = (XᵀX)⁻¹Xᵀy then gives the value of θ that minimizes J(θ).
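A minimal sketch of the normal equation in NumPy (np.linalg.pinv plays the role of Octave's pinv; variable names are illustrative):

```python
import numpy as np

def normal_equation(X, y):
    """Closed-form solution theta = pinv(X^T X) X^T y.

    X: (m, n+1) design matrix including the column of ones.
    y: (m,) target vector.
    """
    return np.linalg.pinv(X.T @ X) @ X.T @ y

X = np.c_[np.ones(4), np.array([1.0, 2.0, 3.0, 4.0])]
y = np.array([2.0, 4.0, 6.0, 8.0])
theta = normal_equation(X, y)   # no feature scaling or learning rate needed
print(theta)
```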
2.2 Normal Equation Noninvertibility
When implementing the normal equation in Octave we want to use the 'pinv' function rather than 'inv'. The 'pinv' function will give you a value of θ even if XᵀX is not invertible.
If XᵀX is noninvertible, the common causes might be having:
- Redundant features, where two features are very closely related (i.e. they are linearly dependent)
- Too many features (e.g. m ≤ n). In this case, delete some features or use “regularization” (to be explained in a later lesson).
Solutions to the above problems include deleting a feature that is linearly dependent with another or deleting one or more features when there are too many features.
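As an illustrative check (NumPy), a redundant feature makes XᵀX singular, and the pseudoinverse still returns a usable θ where a plain inverse would not:

```python
import numpy as np

# x2 is an exact multiple of x1, so the columns are linearly dependent
# and X^T X is singular (non-invertible).
x1 = np.array([1.0, 2.0, 3.0, 4.0])
X = np.c_[np.ones(4), x1, 2.0 * x1]
y = np.array([3.0, 5.0, 7.0, 9.0])

# inv(X.T @ X) fails or is numerically meaningless for a singular matrix;
# pinv still produces a (minimum-norm) least-squares solution.
theta = np.linalg.pinv(X.T @ X) @ X.T @ y
print(theta)
```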
A worked quiz example, applying mean normalization (dividing by the range) to the feature values 7921, 5184, 8836, and 4761:
The mean is (7921 + 5184 + 8836 + 4761) / 4 = 6675.5.
Max − Min = 8836 − 4761 = 4075.
(4761 − 6675.5) / 4075 ≈ −0.4698.
Rounded to two decimal places: −0.47.
(I carelessly answered 0.47 at the time.)
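A one-line sketch to verify that arithmetic (NumPy):

```python
import numpy as np

x = np.array([7921.0, 5184.0, 8836.0, 4761.0])
normalized = (x - x.mean()) / (x.max() - x.min())
print(round(normalized[-1], 2))   # -0.47 for the value 4761
```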
Finally, completing the transition from for loops to vectorized operations gives a very noticeable speedup for machine learning code.