Machine Learning in Action: k-Nearest Neighbor Classification


The k-Nearest Neighbors Algorithm (kNN)

I. How It Works

We start with a sample data set, the training set, in which every sample carries a class label, so the correspondence between each sample and its class is known. When a new, unlabeled sample arrives, we compare each of its features against the corresponding features of the samples in the training set, and the algorithm extracts the class labels of the most similar samples. Typically only the top k most similar samples are considered, where k is an integer usually no larger than 20, and the class that appears most often among those k samples is assigned to the new sample.
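The idea can be sketched in a few lines of NumPy. This is a minimal illustration, assuming Euclidean distance as the similarity measure (the hypothetical helper name `knn_classify` is mine, not from the book); the toy data matches the `createDataSet` example in the code section.

```python
import numpy as np
from collections import Counter

def knn_classify(x, train_X, train_y, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    # Euclidean distance from x to every training sample (via broadcasting)
    dists = np.sqrt(((train_X - x) ** 2).sum(axis=1))
    # indices of the k closest samples
    nearest = dists.argsort()[:k]
    # most frequent label among those neighbours wins
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]

train_X = np.array([[1.0, 1.1], [1.0, 1.0], [0.0, 0.0], [0.0, 0.1]])
train_y = ['A', 'A', 'B', 'B']
print(knn_classify(np.array([0.0, 0.0]), train_X, train_y, k=3))  # B
```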

II. General Workflow

1. Collect the data.

2. Prepare the data: compute the numeric values needed for distance calculations, ideally in a structured format.

3. Analyze the data.

4. Train the algorithm (this step does not apply to kNN, which has no explicit training phase).

5. Test the algorithm.

6. Use the algorithm: feed in sample data and the structured output, run kNN to decide which class each input belongs to, then apply whatever follow-up processing the computed class requires.

III. Code

```python
from numpy import *
import operator
import sys


def createDataSet():
    group = array([[1.0, 1.1], [1.0, 1.0], [0, 0], [0, 0.1]])
    labels = ['A', 'A', 'B', 'B']
    return group, labels


def file2matrix(filename):
    """Parse a tab-separated file into a feature matrix and a label vector."""
    with open(filename) as fr:
        arrayOLines = fr.readlines()
    # get the number of lines
    numberOfLines = len(arrayOLines)
    returnMat = zeros((numberOfLines, 3))
    classLabelVector = []
    for idx, line in enumerate(arrayOLines):
        listFromLine = line.strip().split('\t')
        returnMat[idx, :] = listFromLine[0:3]
        classLabelVector.append(int(listFromLine[-1]))
    return returnMat, classLabelVector


def autoNorm(dataSet):
    """Min-max normalize every feature column to the range [0, 1]."""
    minVals = dataSet.min(0)
    maxVals = dataSet.max(0)
    ranges = maxVals - minVals
    m = dataSet.shape[0]
    normDataSet = dataSet - tile(minVals, (m, 1))
    normDataSet = normDataSet / tile(ranges, (m, 1))
    return normDataSet, ranges, minVals


def classify0(inX, dataSet, labels, k):
    """Classify inX by majority vote among its k nearest training samples."""
    dataSetSize = dataSet.shape[0]
    # calculate the distance between inX and every training sample
    diffMat = tile(inX, (dataSetSize, 1)) - dataSet
    sqDiffMat = diffMat ** 2
    sqDistances = sqDiffMat.sum(axis=1)
    distances = sqDistances ** 0.5
    sortedDistIndicies = distances.argsort()
    # vote among the k nearest neighbours
    classCount = {}
    for i in range(k):
        voteIlabel = labels[sortedDistIndicies[i]]
        classCount[voteIlabel] = classCount.get(voteIlabel, 0) + 1
    sortedClassCount = sorted(classCount.items(),
                              key=operator.itemgetter(1), reverse=True)
    return sortedClassCount[0][0]


def classifyPerson(filename):
    resultList = ['not at all', 'in small doses', 'in large doses']
    percentTats = float(input("percentage of time spent playing video games? "))
    ffMiles = float(input("frequent flier miles per year? "))
    iceCream = float(input("ice cream consumed per year? "))
    inArr = array([ffMiles, percentTats, iceCream])
    datingDataMat, datingLabels = file2matrix(filename)
    normMat, ranges, minVals = autoNorm(datingDataMat)
    classifierResult = classify0((inArr - minVals) / ranges,
                                 normMat, datingLabels, 3)
    print("result =", resultList[classifierResult - 1])


if __name__ == '__main__':
    filename = sys.argv[1]
    classifyPerson(filename)
```
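The normalization in `autoNorm` matters because a feature with a large numeric range (say, frequent flier miles) would otherwise dominate the distance. A standalone sketch of the same min-max idea, on made-up data (the function name `min_max_normalize` is illustrative, not from the book):

```python
import numpy as np

def min_max_normalize(data):
    """Scale each feature column to [0, 1], the same idea as autoNorm."""
    min_vals = data.min(axis=0)
    ranges = data.max(axis=0) - min_vals
    return (data - min_vals) / ranges, ranges, min_vals

# two features with very different scales
data = np.array([[8.0, 400.0],
                 [2.0, 100.0],
                 [5.0, 250.0]])
norm, ranges, min_vals = min_max_normalize(data)
print(norm)  # each column now spans exactly [0, 1]
```

After scaling, both columns contribute comparably to a Euclidean distance.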


(1) Call tile to replicate the test vector once per training sample, then subtract the training set to get the per-feature differences.

(2) Square the differences and call sum per row, giving each squared distance (take the square root for the actual distance).

(3) Sort the distances in ascending order.

(4) Walk the k closest points, tally their class labels, and take the class with the most votes as the prediction for the test vector.
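The four steps above can be traced on the toy data from `createDataSet`; this is a line-by-line sketch of the `classify0` logic, not a separate algorithm:

```python
import numpy as np

inX = np.array([0.0, 0.0])                     # the test vector
dataSet = np.array([[1.0, 1.1], [1.0, 1.0],
                    [0.0, 0.0], [0.0, 0.1]])   # training samples
labels = ['A', 'A', 'B', 'B']
k = 3

# (1) replicate inX once per training sample and subtract
diffMat = np.tile(inX, (dataSet.shape[0], 1)) - dataSet
# (2) square, sum per row, take the root -> Euclidean distances
distances = np.sqrt((diffMat ** 2).sum(axis=1))
# (3) argsort yields indices ordered from nearest to farthest
sortedDistIndicies = distances.argsort()
# (4) tally labels of the k nearest points; majority wins
classCount = {}
for i in sortedDistIndicies[:k]:
    classCount[labels[i]] = classCount.get(labels[i], 0) + 1
prediction = max(classCount, key=classCount.get)
print(prediction)  # B
```

With k = 3 the nearest neighbours are samples 2, 3, and 1, so the vote is B, B, A and the test vector is classified as B.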


IV. Running

Run from a terminal (the script reads the data file path from its first argument): python KNN.py &lt;datafile&gt;



