Batch Gradient Descent


We use linear regression as an example to explain this optimization algorithm.

1. Formula

1.1. Cost Function

We use the residual sum of squares (divided by 2m) as the cost function for linear regression.

$$J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\left[h_\theta(x^{(i)}) - y^{(i)}\right]^2$$
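For reference, the same cost can be computed directly in Octave. This is a minimal sketch, assuming a design matrix X whose first column is all ones, a target vector Y, and a parameter vector theta (the same variable names used in the snippet in section 1.3):

```octave
% Cost J(theta) for linear regression: half the mean of the squared residuals.
function J = computeCost(X, Y, theta)
  m = length(Y);                            % number of training examples
  J = sum((X * theta - Y) .^ 2) / (2 * m);
end
```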

1.2. Visualize Cost Function

E.g. 1:

one parameter only, $\theta_1$: $h_\theta(x) = \theta_1 x_1$

Figure 1: Learning Curve 1 [1]


E.g. 2:

two parameters $\theta_0, \theta_1$: $h_\theta(x) = \theta_0 + \theta_1 x_1$

Figure 2: Learning Curve 2 [2]


Switching to a contour plot:

Figure 3: Learning Curve 2, contour plot [2]
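A contour plot like the one above can be reproduced in Octave by evaluating the cost on a grid of parameter values. This is a minimal sketch, assuming X (with a leading column of ones) and Y are already loaded; the grid ranges are arbitrary assumptions:

```octave
% Sketch: evaluate J(theta0, theta1) on a grid and draw its contour.
theta0_vals = linspace(-10, 10, 100);       % assumed range for theta0
theta1_vals = linspace(-1, 4, 100);         % assumed range for theta1
J_vals = zeros(length(theta0_vals), length(theta1_vals));
m = length(Y);
for i = 1:length(theta0_vals)
  for j = 1:length(theta1_vals)
    t = [theta0_vals(i); theta1_vals(j)];
    J_vals(i, j) = sum((X * t - Y) .^ 2) / (2 * m);
  end
end
% contour() expects the matrix transposed so theta0 runs along the x-axis.
contour(theta0_vals, theta1_vals, J_vals', logspace(-2, 3, 20));
xlabel('\theta_0'); ylabel('\theta_1');
```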


1.3. Gradient Descent Formula

For each parameter $\theta_j$:

$$\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\left[h_\theta(x^{(i)}) - y^{(i)}\right] x_j^{(i)}$$
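This follows from the cost function in section 1.1 by the chain rule; a short derivation for a single parameter $\theta_j$:

$$
\begin{aligned}
\frac{\partial J(\theta)}{\partial \theta_j}
&= \frac{\partial}{\partial \theta_j}\,\frac{1}{2m}\sum_{i=1}^{m}\bigl[h_\theta(x^{(i)}) - y^{(i)}\bigr]^2 \\
&= \frac{1}{2m}\sum_{i=1}^{m} 2\,\bigl[h_\theta(x^{(i)}) - y^{(i)}\bigr]\,\frac{\partial h_\theta(x^{(i)})}{\partial \theta_j} \\
&= \frac{1}{m}\sum_{i=1}^{m}\bigl[h_\theta(x^{(i)}) - y^{(i)}\bigr]\, x_j^{(i)},
\end{aligned}
$$

using $\partial h_\theta(x^{(i)})/\partial \theta_j = x_j^{(i)}$, because $h_\theta(x) = \sum_j \theta_j x_j$ (with $x_0 = 1$).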

E.g.,
two parameters $\theta_0, \theta_1$: $h_\theta(x) = \theta_0 + \theta_1 x_1$

For j = 0 (note that $x_0^{(i)} = 1$):

$$\frac{\partial J(\theta)}{\partial \theta_0} = \frac{1}{m}\sum_{i=1}^{m}\left[h_\theta(x^{(i)}) - y^{(i)}\right]$$

For j = 1:

$$\frac{\partial J(\theta)}{\partial \theta_1} = \frac{1}{m}\sum_{i=1}^{m}\left[h_\theta(x^{(i)}) - y^{(i)}\right] x_1^{(i)}$$

```octave
% Octave
%% =================== Gradient Descent ===================
% data is an m-by-2 matrix loaded elsewhere: column 1 = feature, column 2 = target.
Y = data(:, 2);
len = length(Y);                % number of training examples (m)

% Add a column (x0) of ones to X
X = [ones(len, 1), data(:, 1)];
theta = zeros(2, 1);
alpha = 0.01;
ITERATION = 1500;
jTheta = zeros(ITERATION, 1);

for iter = 1:ITERATION
    % Perform a single gradient descent step on the parameter vector.
    % Note: since theta is updated in place, tempTheta stores the old values
    % so that both parameters are updated simultaneously.
    tempTheta = theta;
    theta(1) = theta(1) - (alpha / len) * sum(X * tempTheta - Y);              % x0 is all ones, so it is omitted
    theta(2) = theta(2) - (alpha / len) * sum((X * tempTheta - Y) .* X(:, 2));

    %% =================== Compute Cost ===================
    jTheta(iter) = sum((X * theta - Y) .^ 2) / (2 * len);
endfor
```
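For more than two parameters, the same update can be written in vectorized form. This is a sketch rather than part of the original post, reusing the X, Y, len, alpha, ITERATION, and jTheta variables defined above:

```octave
% Vectorized batch gradient descent: all parameters are updated at once,
% so the same code works for any number of features.
theta = zeros(size(X, 2), 1);
for iter = 1:ITERATION
    gradient = (X' * (X * theta - Y)) / len;    % (1/m) * sum over i of [h(x_i) - y_i] * x_j_i
    theta = theta - alpha * gradient;
    jTheta(iter) = sum((X * theta - Y) .^ 2) / (2 * len);
endfor
```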

2. Algorithm

For each parameter $\theta_j$ (all parameters are updated simultaneously):

$$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1, \ldots, \theta_n)$$

E.g.,
two parameters $\theta_0, \theta_1$: $h_\theta(x) = \theta_0 + \theta_1 x_1$

For j = 0:

$$\theta_0 := \theta_0 - \alpha \frac{1}{m}\sum_{i=1}^{m}\left[h_\theta(x^{(i)}) - y^{(i)}\right]$$

For j = 1:

$$\theta_1 := \theta_1 - \alpha \frac{1}{m}\sum_{i=1}^{m}\left[h_\theta(x^{(i)}) - y^{(i)}\right] x_1^{(i)}$$

Iterate multiple times (how many depends on the data, the data size, and the step size). Eventually the parameters converge, as shown below.

Figure 4: Visualize Convergence
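A simple way to check convergence is to plot the recorded cost against the iteration number. A minimal sketch, assuming jTheta and ITERATION from the snippet above:

```octave
% Plot the cost history; a curve that decreases and then flattens out
% indicates that gradient descent has converged.
plot(1:ITERATION, jTheta, '-');
xlabel('Iteration');
ylabel('J(\theta)');
```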

3. Analysis

| Pros | Cons |
| --- | --- |
| Controllable by adjusting the step size and data size | Computing effort is large (every update uses the full training set) |
| Easy to program | |

4. How to Choose Step Size?

Choosing an appropriate step size is important. If the step size is too small, the result is not hurt, but it takes many more iterations to converge. If the step size is too large, the algorithm may diverge (not converge).

The graph below shows the cost failing to converge because the step size is too large.

Figure 5: Large Step Size

The best approach, as far as I know, is to decrease the step size as the number of iterations grows.

E.g.,

$$\alpha_{t+1} = \frac{\alpha_t}{t}$$

or

$$\alpha_{t+1} = \frac{\alpha_t}{\sqrt{t}}$$
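A minimal sketch of a decaying step size inside the gradient descent loop; it uses the common $\alpha_t = \alpha_0/\sqrt{t}$ schedule, which is one reading of the formulas above (alpha0 is an assumed initial value, and X, Y, len, ITERATION are reused from the earlier snippets):

```octave
% Decaying step size: alpha shrinks as the iteration count t grows.
alpha0 = 0.1;                               % assumed initial step size
theta = zeros(size(X, 2), 1);
for t = 1:ITERATION
    alpha = alpha0 / sqrt(t);               % or alpha0 / t for a faster decay
    theta = theta - (alpha / len) * (X' * (X * theta - Y));
endfor
```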

Reference

  1. Machine Learning Foundations (机器学习基石), Hsuan-Tien Lin (林轩田), National Taiwan University, lecture_slides-09_handout.pdf

  2. Coursera / Stanford CS229: Machine Learning, Andrew Ng
