Introduction to SVM


SVM has its roots in statistical learning theory and has shown promising empirical results in many practical applications, from handwritten digit recognition to text categorization.
 
A linear SVM is a classifier that searches for the separating hyperplane with the largest margin, which is why it is often known as a maximal margin classifier.

What is the maximal margin?

Decision boundaries with large margins tend to have better generalization error than those with small margins, so the goal is to learn the hyperplane whose margin is as large as possible.
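
To make the margin concrete, here is a minimal sketch, assuming scikit-learn and NumPy are available, that fits a linear SVM on a tiny hand-made separable dataset (the points are hypothetical) and reports the geometric margin 1/||w|| together with the support vectors.

```python
# A minimal sketch: fit a (nearly) hard-margin linear SVM on toy data and
# report its geometric margin, which equals 1 / ||w||.
import numpy as np
from sklearn.svm import SVC

# Two well-separated clusters in 2-D (hypothetical toy data).
X = np.array([[1.0, 1.0], [2.0, 1.5], [1.5, 2.0],
              [4.0, 4.0], [5.0, 4.5], [4.5, 5.0]])
y = np.array([-1, -1, -1, 1, 1, 1])

# A large C approximates the hard-margin (maximal-margin) classifier.
clf = SVC(kernel="linear", C=1e6)
clf.fit(X, y)

w = clf.coef_[0]          # weight vector of the separating hyperplane
b = clf.intercept_[0]     # bias term
margin = 1.0 / np.linalg.norm(w)

print("w =", w, "b =", b)
print("geometric margin =", margin)
print("support vectors:", clf.support_vectors_)
```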

SRM (structural risk minimization) is, besides MDL, another way to express generalization error as a tradeoff between training error and model complexity. It can be used to explain why the margin of a linear classifier matters.

What is a linear classifier?
A linear classifier performs the classification task based on the value of a linear combination of the features.

Example: y = w * x + b, where the predicted class is determined by the sign of w * x + b.
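
As a small illustration of that decision rule, the sketch below classifies a point by the sign of w * x + b; the weight vector and bias are hypothetical values chosen only for demonstration.

```python
# A minimal sketch of the linear decision rule: the label is the sign of
# the linear combination w . x + b.
import numpy as np

w = np.array([0.5, -1.2, 0.3])   # hypothetical weight vector
b = 0.1                          # hypothetical bias

def predict(x):
    """Return +1 or -1 depending on which side of the hyperplane x falls."""
    return 1 if np.dot(w, x) + b >= 0 else -1

print(predict(np.array([1.0, 0.2, -0.5])))   # +1
print(predict(np.array([0.0, 2.0, 0.0])))    # -1
```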

How to learn an SVM model?
Use the Lagrange multiplier method to solve the convex optimization problem of finding the minimal value of f(w) (in the standard formulation, f(w) = ||w||^2 / 2) subject to the constraint that every training example lies on the correct side of the margin.
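
As one possible sketch of that optimization, the code below writes the Lagrangian dual of the hard-margin SVM as a quadratic program and solves it numerically; it assumes NumPy and the cvxopt QP solver are installed, and the toy data points are hypothetical.

```python
# A minimal sketch of solving the hard-margin SVM through its Lagrangian dual:
#   maximize sum(a_i) - 1/2 * sum_ij a_i a_j y_i y_j <x_i, x_j>
#   subject to a_i >= 0 and sum(a_i y_i) = 0,
# written here as a quadratic program in standard form.
import numpy as np
from cvxopt import matrix, solvers

# Toy separable data, labels in {-1, +1} (hypothetical).
X = np.array([[1.0, 1.0], [2.0, 1.5], [4.0, 4.0], [5.0, 4.5]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
n = X.shape[0]

# Dual as a QP: minimize 1/2 a^T P a + q^T a
# subject to G a <= h (a_i >= 0) and A a = b (sum a_i y_i = 0).
P = matrix(np.outer(y, y) * (X @ X.T) + 1e-8 * np.eye(n))  # tiny ridge for stability
q = matrix(-np.ones(n))
G = matrix(-np.eye(n))
h = matrix(np.zeros(n))
A = matrix(y.reshape(1, -1))
b = matrix(0.0)

solvers.options["show_progress"] = False
alphas = np.array(solvers.qp(P, q, G, h, A, b)["x"]).flatten()

# Recover the primal solution: w from the KKT conditions, b from support vectors.
w = (alphas * y) @ X
sv = alphas > 1e-6                      # support vectors have nonzero multipliers
b_val = np.mean(y[sv] - X[sv] @ w)

print("alphas =", np.round(alphas, 4))
print("w =", w, "b =", b_val)
```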

Advantages of SVM:
1 Works well on high-dimensional problems.
2 Reaches a global minimum of the objective, because the training problem is convex. This is unlike rule-based classifiers and neural networks, which employ greedy strategies to search the hypothesis space and tend to find only locally optimal solutions.
3 Built on existing, well-understood mathematical machinery (convex optimization).

Issues of SVM:
1 Categorical data must be transformed into a numeric representation, e.g. by one-hot encoding (see the sketch below).
2 A separate method, such as one-vs-rest, is needed to handle multi-class problems (see Section 5.8; also the sketch below).
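
The sketch below illustrates both workarounds, assuming scikit-learn is available: a one-hot encoding turns a hypothetical categorical feature into numeric columns, and a one-vs-rest wrapper reduces a three-class problem to several binary SVMs.

```python
# A minimal sketch of the two workarounds: encode a categorical feature
# numerically, then train one-vs-rest linear SVMs for a multi-class task.
import numpy as np
from sklearn.preprocessing import OneHotEncoder
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

# Hypothetical categorical feature ("color") alongside one numeric feature.
colors = np.array([["red"], ["blue"], ["green"], ["blue"], ["red"], ["green"]])
numeric = np.array([[1.0], [2.0], [3.0], [2.5], [1.5], [3.5]])
y = np.array([0, 1, 2, 1, 0, 2])        # three classes

# 1. Transform the categorical column into one-hot numeric columns.
encoder = OneHotEncoder()
X = np.hstack([encoder.fit_transform(colors).toarray(), numeric])

# 2. Handle the multi-class problem with one-vs-rest binary SVMs.
clf = OneVsRestClassifier(LinearSVC())
clf.fit(X, y)

print(clf.predict(X))
```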
