2016.03.30 Supervised learning


1. As with full Bayesian inference, MAP Bayesian inference has the advantage of leveraging information that is brought by the prior and cannot be found in the training data. This additional information helps to reduce the variance in the MAP point estimate (in comparison to the ML estimate). However, it does so at the price of increased bias.


Compared with the MLE, the MAP estimator reduces the variance of the estimate, so its sampling distribution is more concentrated; but because MAP estimation injects a prior preference, the bias of the estimate increases.
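The bias–variance trade-off above can be checked numerically. Below is a minimal sketch for estimating a Gaussian mean, assuming a hypothetical Gaussian prior N(0, τ²) on the mean; all parameter values (true_mean, sigma, tau, n) are illustrative choices, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean, sigma = 2.0, 1.0   # assumed data-generating parameters
prior_mean, tau = 0.0, 1.0    # assumed Gaussian prior on the mean

n = 5                          # small sample, where the prior matters most
mle_estimates, map_estimates = [], []
for _ in range(10000):
    x = rng.normal(true_mean, sigma, size=n)
    mle = x.mean()             # maximum likelihood estimate of the mean
    # Gaussian likelihood + Gaussian prior => MAP is a precision-weighted
    # average of the sample mean and the prior mean (shrinkage toward 0)
    w = n / sigma**2
    map_est = (w * mle + prior_mean / tau**2) / (w + 1 / tau**2)
    mle_estimates.append(mle)
    map_estimates.append(map_est)

print(np.var(map_estimates) < np.var(mle_estimates))   # variance shrinks
print(abs(np.mean(map_estimates) - true_mean)
      > abs(np.mean(mle_estimates) - true_mean))       # bias grows
```

Over repeated samples the MAP estimates cluster more tightly (lower variance) but are pulled toward the prior mean, away from the true mean (higher bias), exactly as the excerpt describes.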



2. The power of the kernel trick


The kernel trick is powerful for two reasons. First, it allows us to learn models that are nonlinear as a function of x using convex optimization techniques that are guaranteed to converge efficiently. This is possible because we consider φ fixed and optimize only α, i.e., the optimization algorithm can view the decision function as being linear in a different space. Second, the kernel function k often admits an implementation that is significantly more computationally efficient than naively constructing two φ(x) vectors and explicitly taking their dot product.


The SVM is not the only algorithm that uses the kernel trick; many algorithms can be generalized from linear to nonlinear versions this way. All algorithms that employ the kernel trick are collectively known as kernel methods.
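The second point (computational efficiency) is easy to verify: for the polynomial kernel k(x, y) = (xᵀy)², the explicit feature map φ(x) consists of all pairwise products xᵢxⱼ, which is O(d²) to build, while the kernel itself costs only O(d). A minimal sketch (the kernel choice here is an illustrative example, not one named in the text):

```python
import numpy as np

def phi(x):
    # explicit feature map for k(x, y) = (x @ y) ** 2:
    # all pairwise products x_i * x_j, a vector of length d**2
    return np.outer(x, x).ravel()

def k(x, y):
    # kernel evaluation: one O(d) dot product instead of the O(d^2) map
    return (x @ y) ** 2

rng = np.random.default_rng(1)
x, y = rng.normal(size=4), rng.normal(size=4)
print(np.isclose(phi(x) @ phi(y), k(x, y)))  # both routes give the same value
```

The two computations agree exactly, so a learner that only needs dot products in feature space never has to materialize φ(x).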

3. The main drawbacks of kernel methods

A major drawback to kernel machines is that the cost of evaluating the decision function is linear in the number of training examples, because the i-th example contributes a term α_i k(x, x^(i)) to the decision function. Support vector machines are able to mitigate this by learning an α vector that contains mostly zeros. Classifying a new example then requires evaluating the kernel function only for the training examples that have non-zero α_i. These training examples are known as support vectors.
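The sparsity argument can be sketched directly: with a mostly-zero α, summing only over the support vectors gives the same decision value as summing over the whole training set. Everything below (the RBF kernel, gamma, the particular indices and coefficients) is an illustrative assumption.

```python
import numpy as np

def rbf(x, z, gamma=0.5):
    # assumed RBF kernel; gamma is an illustrative hyperparameter
    return np.exp(-gamma * np.sum((x - z) ** 2))

rng = np.random.default_rng(2)
X_train = rng.normal(size=(1000, 3))   # 1000 training examples
alpha = np.zeros(1000)
sv_idx = [3, 41, 977]                  # pretend only these are support vectors
alpha[sv_idx] = [1.2, -0.7, 0.4]
b = 0.1

def decide(x):
    # sum only the non-zero terms: cost scales with the number of
    # support vectors, not with the full training-set size
    return b + sum(alpha[i] * rbf(x, X_train[i]) for i in sv_idx)

x_new = rng.normal(size=3)
full = b + sum(alpha[i] * rbf(x_new, X_train[i]) for i in range(1000))
print(np.isclose(decide(x_new), full))  # 3 kernel evaluations vs 1000
```

Here classification needs 3 kernel evaluations instead of 1000, which is exactly the mitigation the excerpt attributes to SVMs.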


Kernel machines also suffer from a high computational cost of training when the dataset is large. We will revisit this idea in Sec. 5.9. Kernel machines with generic kernels struggle to generalize well. We will explain why in Sec. 5.11. The modern incarnation of deep learning was designed to overcome these limitations of kernel machines. The current deep learning renaissance began when Hinton et al. (2006) demonstrated that a neural network could outperform the RBF kernel SVM on the MNIST benchmark.


4. On the k-nearest neighbor algorithm


As a non-parametric learning algorithm, k-nearest neighbor can achieve very high capacity. For example, suppose we have a multiclass classification task and measure performance with 0-1 loss. In this setting, 1-nearest neighbor converges to double the Bayes error as the number of training examples approaches infinity. The error in excess of the Bayes error results from choosing a single neighbor by breaking ties between equally distant neighbors randomly. When there is infinite training data, all test points x will have infinitely many training set neighbors at distance zero. If we allow the algorithm to use all of these neighbors to vote, rather than randomly choosing one of them, the procedure converges to the Bayes error rate. The high capacity of k-nearest neighbors allows it to obtain high accuracy given a large training set.
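The high capacity of 1-nearest neighbor is visible even in a toy sketch: on a noiseless problem (Bayes error zero), 1-NN fits the training data perfectly because every training point is its own nearest neighbor. The data and labeling rule below are illustrative assumptions.

```python
import numpy as np

def one_nn_predict(X_train, y_train, X_test):
    # brute-force 1-NN: return the label of the closest training point
    d = np.sum((X_test[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
    return y_train[np.argmin(d, axis=1)]

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # noiseless labels: Bayes error is 0
pred = one_nn_predict(X, y, X)           # each point is its own neighbor
print((pred == y).mean())                # perfect accuracy on the training set
```

This perfect interpolation is what "very high capacity" means; the excess error discussed above only appears on new test points when the labels are noisy.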



