Machine Learning Foundations


1. When Can Machines Learn?

1.2 Learning to Answer Yes/No

PLA takes a linearly separable D and perceptrons H to get a hypothesis g.


unknown target function f: X → Y

training examples D: (x_1, y_1), ···, (x_N, y_N)  →  learning algorithm A  →  final hypothesis g ≈ f

                                                         (hypothesis set H, H = all possible perceptrons)


Perceptron:

A Simple Hypothesis Set: the ‘Perceptron’ (so called historically): h(x) = sign((Σ_{i=1}^{d} w_i x_i) − threshold)

Vector Form of Perceptron Hypothesis: absorb the threshold as w_0 = −threshold with a constant feature x_0 = +1, giving h(x) = sign(w^T x)

Perceptrons in R^2: perceptrons ⇔ linear (binary) classifiers
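
As a concrete illustration, a minimal sketch of the vector-form hypothesis in Python with NumPy (the function name is mine, not from the lecture):

import numpy as np

def perceptron_predict(w, x):
    # Vector-form perceptron hypothesis: h(x) = sign(w^T x).
    # x is assumed to already carry the constant feature x[0] = +1,
    # so w[0] plays the role of the negated threshold.
    return 1 if np.dot(w, x) > 0 else -1   # treat sign(0) as -1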


Perceptron Learning Algorithm (PLA):

start from some w_0 (say, 0) and ‘correct’ its mistakes on D: for t = 0, 1, ..., find a mistake (x_n, y_n) with sign(w_t^T x_n) ≠ y_n, update w_{t+1} = w_t + y_n x_n, and repeat until no more mistakes; return the last w as g
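
A minimal cyclic-PLA sketch under the assumption that D is linearly separable, so the loop eventually halts; the function name and the max_iters safeguard are mine:

import numpy as np

def pla(X, y, max_iters=10000):
    # X: (N, d+1) array with constant feature X[:, 0] = 1.
    # y: (N,) array of labels in {-1, +1}.
    w = np.zeros(X.shape[1])            # start from w_0 = 0
    for _ in range(max_iters):          # safeguard; a separable D halts sooner
        mistake = False
        for xn, yn in zip(X, y):        # cycle through D looking for mistakes
            if np.sign(w @ xn) != yn:   # sign(0) counts as a mistake
                w = w + yn * xn         # correct it: w_{t+1} = w_t + y_n x_n
                mistake = True
        if not mistake:                 # a full pass with no mistakes: halt
            break
    return w                            # final hypothesis g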


Linear Separability:

linearly separable D ⇔ there exists a perfect w_f such that y_n = sign(w_f^T x_n) for all n ⇔ PLA halts (i.e. no more mistakes)


More about PLA:

As long as D is linearly separable and PLA corrects by mistake,
• the inner product of w_f and w_t grows fast while the length of w_t grows slowly, so the number of updates T is bounded: T ≤ (R/ρ)², with R = max_n ‖x_n‖ and ρ = min_n y_n w_f^T x_n / ‖w_f‖ (derivation sketched below)
• the PLA ‘lines’ become more and more aligned with w_f ⇒ PLA halts
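
For reference, a condensed version of the standard convergence argument in LaTeX, assuming w_0 = 0 and that updates happen only on mistakes (so y_n w_t^T x_n ≤ 0):

\begin{align*}
w_f^\top w_{t+1} &= w_f^\top w_t + y_n\, w_f^\top x_n \;\ge\; w_f^\top w_t + \rho\,\|w_f\|,\\
\|w_{t+1}\|^2 &= \|w_t\|^2 + 2\, y_n\, w_t^\top x_n + \|x_n\|^2 \;\le\; \|w_t\|^2 + R^2.
\end{align*}

After $T$ updates, $w_f^\top w_T \ge T\rho\|w_f\|$ and $\|w_T\| \le \sqrt{T}\,R$, hence

\[
1 \;\ge\; \frac{w_f^\top w_T}{\|w_f\|\,\|w_T\|} \;\ge\; \sqrt{T}\,\frac{\rho}{R}
\quad\Longrightarrow\quad T \le \left(\frac{R}{\rho}\right)^2.
\]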


Learning with Noisy Data --> Pocket Algorithm:

modify PLA by keeping the best weights seen so far ‘in the pocket’: correct a random mistake as in PLA, but replace the pocket weights only when the new w makes fewer mistakes on D, and return the pocket weights as g (sketch below)
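
A minimal pocket sketch in the same spirit as the PLA code above (the function name and the num_updates budget are mine):

import numpy as np

def pocket(X, y, num_updates=100, rng=None):
    # Pocket algorithm: run PLA-style corrections, but keep the best
    # weights seen so far 'in the pocket'. Useful on noisy D, where
    # plain PLA never halts.
    rng = np.random.default_rng() if rng is None else rng
    w = np.zeros(X.shape[1])
    pocket_w, pocket_errors = w.copy(), np.sum(np.sign(X @ w) != y)
    for _ in range(num_updates):
        mistakes = np.flatnonzero(np.sign(X @ w) != y)
        if mistakes.size == 0:          # perfect on D: done early
            return w
        n = rng.choice(mistakes)        # correct a random mistake
        w = w + y[n] * X[n]
        errors = np.sum(np.sign(X @ w) != y)
        if errors < pocket_errors:      # better than the pocket: swap in
            pocket_w, pocket_errors = w.copy(), errors
    return pocket_w                     # return the best weights, not the last w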

