Machine Learning - Week 1


1 Introduction

What is Machine Learning?

Two definitions of Machine Learning are offered. Arthur Samuel described it as: "the field of study that gives computers the ability to learn without being explicitly programmed." This is an older, informal definition.

Tom Mitchell provides a more modern definition: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E."

Example: playing checkers.

  • E = the experience of playing many games of checkers
  • T = the task of playing checkers.
  • P = the probability that the program will win the next game.

Supervised Learning

In supervised learning, we are given a data set and already know what our correct output should look like, having the idea that there is a relationship between the input and the output.

Supervised learning problems are categorized into "regression" and "classification" problems. In a regression problem, we are trying to predict results within a continuous output, meaning that we are trying to map input variables to some continuous function. In a classification problem, we are instead trying to predict results in a discrete output. In other words, we are trying to map input variables into discrete categories.

Example:

Given data about the size of houses on the real estate market, try to predict their price. Price as a function of size is a continuous output, so this is a regression problem.

We could turn this example into a classification problem by instead making our output about whether the house "sells for more or less than the asking price." Here we are classifying the houses based on price into two discrete categories.

Unsupervised Learning

Unsupervised learning, on the other hand, allows us to approach problems with little or no idea what our results should look like. We can derive structure from data where we don't necessarily know the effect of the variables.

We can derive this structure by clustering the data based on relationships among the variables in the data.

With unsupervised learning there is no feedback based on the prediction results, i.e., there is no teacher to correct you. It’s not just about clustering. For example, associative memory is unsupervised learning.

Example:

Clustering: Take a collection of 1000 essays written on the US Economy, and find a way to automatically group these essays into a small number of groups that are somehow similar or related by different variables, such as word frequency, sentence length, page count, and so on.

Associative: Suppose a doctor, over years of experience, forms associations in his mind between patient characteristics and the illnesses that they have. When a new patient shows up, the doctor associates possible illnesses based on this patient's characteristics, such as symptoms, family medical history, physical attributes, and mental outlook, drawing on what the doctor has seen before with similar patients. This is not the same as rule-based reasoning as in expert systems; in this case we would like to estimate a mapping function from patient characteristics to illnesses.




Linear Regression with One Variable

Model Representation

Recall that in *regression problems*, we are taking input variables and trying to map the output onto a *continuous* expected result function.

Linear regression with one variable is also known as "univariate linear regression."

Univariate linear regression is used when you want to predict a single output value from a single input value. We're doing supervised learning here, so that means we already have an idea what the input/output cause and effect should be.

The Hypothesis Function

Our hypothesis function has the general form:

$$h_\theta(x) = \theta_0 + \theta_1 x$$

We give $h_\theta$ values for $\theta_0$ and $\theta_1$ to get our estimated output 'y'. In other words, we are trying to create a function called $h_\theta$ that is able to reliably map our input data (the x's) to our output data (the y's).

Example:

| x (input) | y (output) |
| --- | --- |
| 0 | 4 |
| 1 | 7 |
| 2 | 7 |
| 3 | 8 |

Now we can make a random guess about our $h_\theta$ function: $\theta_0 = 2$ and $\theta_1 = 2$. The hypothesis function becomes $h_\theta(x) = 2 + 2x$.

So for an input of 1 to our hypothesis, the prediction will be 4. This is off by 3, since the actual value in the table is 7.
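As a quick sanity check, here is a small Python/NumPy sketch (my own illustration; NumPy is an assumption, not part of the original notes) evaluating this guessed hypothesis against the table:

```python
import numpy as np

# Training data from the table above
x = np.array([0, 1, 2, 3])   # inputs
y = np.array([4, 7, 7, 8])   # actual outputs

# The guessed parameters: theta0 = 2, theta1 = 2
theta0, theta1 = 2.0, 2.0

def h(x):
    """Hypothesis h_theta(x) = theta0 + theta1 * x."""
    return theta0 + theta1 * x

print(h(x))      # predictions: [2. 4. 6. 8.]
print(h(x) - y)  # errors:      [-2. -3. -1.  0.]
```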

Cost Function

We can measure the accuracy of our hypothesis function by using a cost function. This takes an average (actually a fancier version of an average) of all the results of the hypothesis with inputs from x's compared to the actual output y's.

$$J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$$

To break it apart, it is $\frac{1}{2}\bar{x}$, where $\bar{x}$ is the mean of the squares of $h_\theta(x^{(i)}) - y^{(i)}$, or the difference between the predicted value and the actual value.

This function is otherwise called the "Squared error function", or "Mean squared error". The mean is halved $\left(\frac{1}{2m}\right)$ as a convenience for the computation of the gradient descent, as the derivative term of the square function will cancel out the $\frac{1}{2}$ term.

Now we are able to concretely measure the accuracy of our predictor function against the correct results we have so that we can predict new results we don't have.
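To make this concrete, here is a minimal Python/NumPy sketch (my own illustration, not part of the course materials) computing $J$ on the example data above for the guess $\theta_0 = 2$, $\theta_1 = 2$:

```python
import numpy as np

x = np.array([0, 1, 2, 3])
y = np.array([4, 7, 7, 8])
m = len(x)

def cost(theta0, theta1):
    """Squared error cost: J = (1 / 2m) * sum((h(x_i) - y_i)^2)."""
    errors = (theta0 + theta1 * x) - y
    return np.sum(errors ** 2) / (2 * m)

print(cost(2.0, 2.0))  # (4 + 9 + 1 + 0) / (2 * 4) = 1.75
```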

Gradient Descent

So we have our hypothesis function and we have a way of measuring how accurate it is. Now what we need is a way to automatically improve our hypothesis function. That's where gradient descent comes in.

Imagine that we graph our hypothesis function based on its parameters $\theta_0$ and $\theta_1$ (actually we are graphing the cost function as a function of the parameter values). This can be kind of confusing; we are moving up to a higher level of abstraction. We are not graphing x and y itself, but the parameter guesses of our hypothesis function.

We put $\theta_0$ on the x axis and $\theta_1$ on the z axis, with the cost function on the vertical y axis. The points on our graph will be the result of the cost function using our hypothesis with those specific theta parameters.

We will know that we have succeeded when our cost function is at the very bottom of the pits in our graph, i.e. when its value is the minimum.

The way we do this is by taking the derivative of our cost function. The slope of the tangent line at a point is the derivative at that point, and it gives us a direction to move towards. We make steps down the cost function in that direction, with the size of each step scaled by the parameter $\alpha$, called the learning rate.

The gradient descent equation is:

repeat until convergence:

$$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1)$$

for $j = 0$ and $j = 1$

Intuitively, this could be thought of as:

repeat until convergence:

$$\theta_j := \theta_j - \alpha \left[ \text{slope of tangent, i.e. the derivative} \right]$$
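As a toy illustration of this update rule (my own sketch, using a made-up one-parameter cost $J(\theta) = (\theta - 3)^2$, whose minimum is at $\theta = 3$):

```python
# Toy gradient descent on J(theta) = (theta - 3)^2.
# The derivative is dJ/dtheta = 2 * (theta - 3).
theta = 0.0   # initial guess
alpha = 0.1   # learning rate

for _ in range(100):
    slope = 2 * (theta - 3)        # derivative at the current theta
    theta = theta - alpha * slope  # step against the slope

print(theta)  # very close to 3.0, the minimum
```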

Gradient Descent for Linear Regression

When specifically applied to the case of linear regression, a new form of the gradient descent equation can be derived. We can substitute our actual cost function and our actual hypothesis function and modify the equation to the following (the derivation of the formulas is out of the scope of this course):

repeat until convergence: {

$$\theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)$$

$$\theta_1 := \theta_1 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( \left( h_\theta(x^{(i)}) - y^{(i)} \right) x^{(i)} \right)$$

}

where $m$ is the size of the training set, $\theta_0$ a constant that will be changing simultaneously with $\theta_1$, and $x^{(i)}, y^{(i)}$ are values of the given training set (data).

Note that we have separated out the two cases for $\theta_j$, and that for $\theta_1$ we are multiplying by $x^{(i)}$ at the end due to the derivative.

The point of all this is that if we start with a guess for our hypothesis and then repeatedly apply these gradient descent equations, our hypothesis will become more and more accurate.
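Putting the pieces together, here is a minimal Python/NumPy sketch of these simultaneous updates on the example data from earlier (variable names and hyperparameters are my own assumptions):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([4.0, 7.0, 7.0, 8.0])
m = len(x)

theta0, theta1 = 0.0, 0.0  # initial guess
alpha = 0.1                # learning rate

for _ in range(2000):
    errors = (theta0 + theta1 * x) - y   # h(x^(i)) - y^(i)
    grad0 = errors.sum() / m             # partial derivative w.r.t. theta0
    grad1 = (errors * x).sum() / m       # partial derivative w.r.t. theta1
    # Simultaneous update of both parameters
    theta0, theta1 = theta0 - alpha * grad0, theta1 - alpha * grad1

print(theta0, theta1)  # approaches the least-squares fit, about 4.7 and 1.2
```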

What's Next

Instead of using linear regression on just one input variable, we'll generalize and expand our concepts so that we can predict data with multiple input variables. Also, we'll solve for $\theta_0$ and $\theta_1$ exactly without needing an iterative algorithm like gradient descent.





Linear Algebra Review

Khan Academy has excellent Linear Algebra Tutorials.

This online Linear Algebra text is also an excellent resource, particularly for a proof of the normal equation.

Matrices and Vectors

Matrices are 2-dimensional arrays:

$$A = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \\ j & k & l \end{bmatrix}$$

The above matrix has four rows and three columns, so it is a 4 x 3 matrix.

A vector is a matrix with one column and many rows:

$$\begin{bmatrix} w \\ x \\ y \\ z \end{bmatrix}$$

So vectors are a subset of matrices. The above vector is a 4 x 1 matrix.
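For concreteness, here is a brief Python/NumPy sketch of these objects (my own illustration; note that NumPy itself is 0-indexed, unlike the 1-indexed convention used below):

```python
import numpy as np

# A 4 x 3 matrix (four rows, three columns)
A = np.array([[1,  2,  3],
              [4,  5,  6],
              [7,  8,  9],
              [10, 11, 12]])
print(A.shape)  # (4, 3)

# A 4 x 1 column vector
v = np.array([[1], [2], [3], [4]])
print(v.shape)  # (4, 1)

# NumPy is 0-indexed, so A[0, 1] is the 1st-row, 2nd-column element
print(A[0, 1])  # 2
```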

Notation and terms:

* $A_{ij}$ refers to the element in the ith row and jth column of matrix A.

* A vector with 'n' rows is referred to as an 'n'-dimensional vector.

* $v_i$ refers to the element in the ith row of the vector.

* In general, all our vectors and matrices will be 1-indexed.

* Matrices are usually denoted by uppercase names while vectors are lowercase.

* "Scalar" means that an object is a single value, not a vector or matrix.

* $\mathbb{R}$ refers to the set of scalar real numbers.

* $\mathbb{R}^n$ refers to the set of n-dimensional vectors of real numbers.

Addition and Scalar Multiplication

Addition and subtraction are element-wise, so you simply add or subtract each corresponding element:

$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} + \begin{bmatrix} w & x \\ y & z \end{bmatrix} = \begin{bmatrix} a+w & b+x \\ c+y & d+z \end{bmatrix}$$

To add or subtract two matrices, their dimensions must be the same.

In scalar multiplication, we simply multiply every element by the scalar value:

$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} \cdot x = \begin{bmatrix} ax & bx \\ cx & dx \end{bmatrix}$$
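A short Python/NumPy sketch of element-wise addition, subtraction, and scalar multiplication (example values are my own):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

print(A + B)  # element-wise sum:        [[ 6  8] [10 12]]
print(A - B)  # element-wise difference: [[-4 -4] [-4 -4]]
print(3 * A)  # scalar multiplication:   [[ 3  6] [ 9 12]]
```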

Matrix-Vector Multiplication

We map the column of the vector onto each row of the matrix, multiplying each element and summing the result.

$$\begin{bmatrix} a & b \\ c & d \\ e & f \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} ax + by \\ cx + dy \\ ex + fy \end{bmatrix}$$

The result is a vector. The vector must be the second term of the multiplication. The number of rows of the vector must equal the number of columns of the matrix.

An n x m matrix multiplied by an m x 1 vector results in an n x 1 vector.
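Here is a hedged Python/NumPy sketch of a 3 x 2 matrix times a 2 x 1 vector (example values are my own):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])  # 3 x 2 matrix
v = np.array([[7],
              [8]])     # 2 x 1 vector

print(A @ v)  # 3 x 1 result: [[23] [53] [83]]
```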

Matrix-Matrix Multiplication

We multiply two matrices by breaking the computation into several matrix-vector multiplications and concatenating the results:

$$\begin{bmatrix} a & b \\ c & d \\ e & f \end{bmatrix} \begin{bmatrix} w & x \\ y & z \end{bmatrix} = \begin{bmatrix} aw + by & ax + bz \\ cw + dy & cx + dz \\ ew + fy & ex + fz \end{bmatrix}$$

An m x n matrix multiplied by an n x o matrix results in an m x o matrix. In the above example, a 3 x 2 matrix times a 2 x 2 matrix resulted in a 3 x 2 matrix.

To multiply two matrices, the number of columns of the first matrix must equal the number of rows of the second matrix.
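A matching Python/NumPy sketch of a 3 x 2 matrix times a 2 x 2 matrix (example values are my own):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])   # 3 x 2
B = np.array([[7,  8],
              [9, 10]])  # 2 x 2

print(A @ B)  # 3 x 2 result: [[ 25  28] [ 57  64] [ 89 100]]
```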

Matrix Multiplication Properties

* Not commutative. $AB \neq BA$

* Associative. $(AB)C = A(BC)$

The "identity matrix", when multiplied by any matrix of the same dimensions, results in the original matrix. It's just like multiplying numbers by 1. The identity matrix simply has 1's on the diagonal and 0's elsewhere.

When multiplying the identity matrix after some matrix, the square identity matrix should match the other matrix's columns. When multiplying the identity matrix before some other matrix, the square identity matrix should match the other matrix's rows.
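These properties are easy to check numerically; here is a small Python/NumPy sketch (example matrices are my own):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])
C = np.array([[2, 0],
              [1, 1]])
I = np.eye(2, dtype=int)  # 2 x 2 identity matrix

print(np.array_equal(A @ B, B @ A))              # False: not commutative
print(np.array_equal((A @ B) @ C, A @ (B @ C)))  # True: associative
print(np.array_equal(A @ I, A))                  # True: identity preserves A
```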

Inverse and Transpose

The inverse of a matrix $A$ is denoted $A^{-1}$. Multiplying by the inverse results in the identity matrix.

A non-square matrix does not have an inverse matrix. We can compute inverses of matrices in Octave with the pinv(A) function [1].

The transposition of a matrix is like rotating the matrix once clockwise and then reversing it:

$$A = \begin{bmatrix} a & b \\ c & d \\ e & f \end{bmatrix}$$

$$A^T = \begin{bmatrix} a & c & e \\ b & d & f \end{bmatrix}$$

In other words:

$$A_{ij} = A^T_{ji}$$
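Finally, a short Python/NumPy sketch of transpose and (pseudo-)inverse; np.linalg.pinv plays the same role as the Octave pinv mentioned above (example values are my own):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])

print(A.T)  # transpose: the 2 x 3 matrix [[1 3 5] [2 4 6]]

B = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B_inv = np.linalg.pinv(B)   # pseudo-inverse, like Octave's pinv(A)
print(np.round(B @ B_inv))  # approximately the 2 x 2 identity
```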

Footnotes

[1]: As described in the course video, this Octave function computes the pseudo-inverse, which is defined even for singular matrices that do not have inverses.

