Supervised Learning 001: k-Nearest Neighbor


What is k-Nearest Neighbors (kNN)?

      It is a classification method based on distance measurements.

      We have an existing set of example data, called the training set. Every example in this set carries a category/class label, so we know which class each piece of data falls into.

      When we are given a new piece of data without a label, we compare it to every piece of existing data. We then take the most similar pieces of data (the nearest neighbors) and look at their labels.

      Specifically, we look at the top k most similar pieces of data. Finally, we take the class label that appears most often among these k pieces of data as the label for the new data, as in the sketch below.
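
A minimal sketch of this idea in Python with NumPy. The function name classify0, the choice of Euclidean distance, and the toy data are my own assumptions for illustration, not something specified in the text above.

```python
import numpy as np
from collections import Counter

def classify0(in_x, data_set, labels, k):
    """Classify in_x by a majority vote among its k nearest neighbors.

    in_x     -- 1-D array, the new (unlabeled) data point
    data_set -- 2-D array, one training example per row
    labels   -- list of class labels, one per training example
    k        -- number of neighbors to consider
    """
    # Euclidean distance from in_x to every training example
    distances = np.sqrt(((data_set - in_x) ** 2).sum(axis=1))
    # Indices of the k closest training examples
    nearest = distances.argsort()[:k]
    # Majority vote among their labels
    k_labels = [labels[i] for i in nearest]
    return Counter(k_labels).most_common(1)[0][0]

# Tiny example: two classes 'A' and 'B'
group = np.array([[1.0, 1.1], [1.0, 1.0], [0.0, 0.0], [0.0, 0.1]])
labels = ['A', 'A', 'B', 'B']
print(classify0(np.array([0.1, 0.2]), group, labels, 3))  # -> 'B'
```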


General Process of kNN:

    1. Collect Data: gather the training data and the data to be classified.

    2. Prepare Data: prepare the numeric values needed for a distance calculation. A structured data format is best.

    3. Analyze Data: decide how to calculate the distance (similarity) between pieces of data.

    4. Test Algorithm: test the algorithm and measure its error rate; keep refining until the error rate is acceptable (see the sketch after this list).

    5. Use Algorithm: once the error rate is acceptable, feed new, unclassified data into the kNN algorithm to determine which class it belongs to.
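
To make steps 4 and 5 concrete, the sketch below holds out a slice of a labeled data set, classifies each held-out example, and reports the error rate. The 10% hold-out ratio, the synthetic two-class data, and the helper names are assumptions for illustration only.

```python
import numpy as np
from collections import Counter

def classify0(in_x, data_set, labels, k):
    # Same majority-vote classifier as in the previous sketch
    distances = np.sqrt(((data_set - in_x) ** 2).sum(axis=1))
    k_labels = [labels[i] for i in distances.argsort()[:k]]
    return Counter(k_labels).most_common(1)[0][0]

def error_rate(data_set, labels, k=3, hold_out_ratio=0.1):
    """Classify a held-out slice of the labeled data and report the error rate."""
    n_test = int(len(data_set) * hold_out_ratio)
    errors = 0
    for i in range(n_test):
        # Train on the examples outside the hold-out slice, test on example i
        predicted = classify0(data_set[i], data_set[n_test:], labels[n_test:], k)
        if predicted != labels[i]:
            errors += 1
    return errors / n_test

# Illustrative run on synthetic two-class data
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
labs = ['A'] * 50 + ['B'] * 50
# Shuffle so the hold-out slice contains both classes
order = rng.permutation(len(data))
data, labs = data[order], [labs[i] for i in order]
print("estimated error rate:", error_rate(data, labs, k=3))
```

If the measured error rate is too high, typical adjustments are changing k, normalizing the numeric features, or collecting more training data.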


k-Nearest Neighbors: Pros and Cons

    Pros: high accuracy, insensitive to outliers, no assumptions about the input data.

    Cons: computationally expensive, requires a lot of memory.

    Works with: numeric values, nominal values.