Machine Learning in Action: k-Nearest Neighbors (Reading Notes)

Source: Internet · Editor: 程序博客网 · 2024/05/21 10:33

The k-nearest neighbors algorithm (Python)

import operator
import numpy as np

def classify0(inx, dataSet, labels, k):
    """
    :param inx: query sample to classify
    :param dataSet: training sample matrix
    :param labels: labels of the training samples
    :param k: number of nearest neighbors to vote
    :return: predicted label for the query sample
    """
    # Distance computation
    dataSetSize = dataSet.shape[0]  # number of rows; for group this is 4 (shape (4, 2))
    diffMax = np.tile(inx, (dataSetSize, 1)) - dataSet  # tile the query to the training matrix's shape, then subtract
    sqDiffMax = diffMax ** 2                 # element-wise square
    sqDistances = sqDiffMax.sum(axis=1)      # sum along each row
    distances = sqDistances ** 0.5           # square root -> Euclidean distance
    # Sort
    sortedDisIndicies = distances.argsort()  # indices sorted by ascending distance
    # Take the k closest points
    classCount = {}
    for i in range(k):
        voteIlabel = labels[sortedDisIndicies[i]]  # label of the i-th nearest neighbor
        classCount[voteIlabel] = classCount.get(voteIlabel, 0) + 1  # tally votes per label
    # Sort the k labels by vote count, descending
    sortedClassCount = sorted(classCount.items(), key=operator.itemgetter(1), reverse=True)
    return sortedClassCount[0][0]
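The tile call can be avoided with NumPy broadcasting, which subtracts the query row from every training row directly. A minimal sketch of just the distance step (the helper name `knn_distances` is my own):

```python
import numpy as np

def knn_distances(inx, dataSet):
    # broadcasting: (n, d) - (d,) subtracts the query from every row
    diff = dataSet - np.asarray(inx, dtype=float)
    return np.sqrt((diff ** 2).sum(axis=1))

group = np.array([[1.0, 1.1], [1.0, 1.0], [0, 0], [0, 0.1]])
print(knn_distances([0, 0], group))  # distance from [0, 0] to each training row
```

Sorting these distances with `argsort` gives the same neighbor ordering that `classify0` votes over.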

Build the training sample matrix and label list:
def createDataSet():
    group = np.array([[1.0, 1.1], [1.0, 1.0], [0, 0], [0, 0.1]])
    labels = ['A', 'A', 'B', 'B']
    return group, labels


A test call (assuming the functions above live in a module named knn):
group, labels = knn.createDataSet()
res = knn.classify0([0, 0], group, labels, 3)

Reading the file: 1000 rows, where the first three columns are the training features and the last column is the label.
def file2maxtrix(filename):
    fr = open(filename)
    arrayOLines = fr.readlines()
    fr.close()
    numberOfLines = len(arrayOLines)  # number of lines in the file
    returnMax = np.zeros((numberOfLines, 3))  # 1000 x 3 zero matrix
    classLabelVector = []
    index = 0
    for line in arrayOLines:
        line = line.strip()
        listFromLine = line.split('\t')
        returnMax[index, :] = listFromLine[0:3]  # first three columns are the features
        classLabelVector.append(int(listFromLine[-1]))  # last column is the label
        index += 1
    return returnMax, classLabelVector

Read the file and plot it: before any processing, a quick visualization reveals the structure of the features. The data file can be downloaded with the book's source code from https://www.manning.com/books/machine-learning-in-action
import matplotlib.pyplot as plt

filname = 'C:/javacode/machinelearninginaction/Ch02/datingTestSet2.txt'
datingDateMat, datingLabels = knn.file2maxtrix(filname)
print('datingDateMat: {}'.format(datingDateMat))
fig = plt.figure()
ax = fig.add_subplot(111)
ax.scatter(datingDateMat[:, 0], datingDateMat[:, 1], c=datingLabels)  # c colors each point by its label
plt.show()
(Figure: scatter plot of the first two feature columns, colored by label.)


Before computing distances on the real samples, the feature values must be normalized; otherwise features with large ranges (such as flier miles) dominate the distance.

def autoNorm(dataSet):
    """
    Min-max normalize the features to [0, 1]
    """
    minVals = dataSet.min(0)   # per-column minimum
    maxVals = dataSet.max(0)   # per-column maximum
    ranges = maxVals - minVals
    normDataSet = np.zeros(np.shape(dataSet))
    m = dataSet.shape[0]  # number of rows (1000)
    normDataSet = dataSet - np.tile(minVals, (m, 1))  # tile the minimums to the data's shape and subtract
    # print(np.tile(minVals, (m, 1)))
    normDataSet = normDataSet / np.tile(ranges, (m, 1))  # divide by (max - min)
    return normDataSet, ranges, minVals
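The tile calls mirror the book's listing; broadcasting gives the same min-max scaling more directly. A tiny worked example on made-up numbers:

```python
import numpy as np

data = np.array([[0.0, 10.0], [5.0, 20.0], [10.0, 30.0]])
minVals = data.min(0)             # per-column minimum: [0, 10]
ranges = data.max(0) - minVals    # per-column range:   [10, 20]
norm = (data - minVals) / ranges  # broadcasting replaces both tile calls
print(norm)  # every column now spans [0, 1]
```

The middle row lands at 0.5 in each column, confirming that both columns are scaled onto the same [0, 1] interval.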

Evaluation code:

def datingClassTest(filname):
    hoRatio = 0.10
    datingDateMat, datingLabels = knn.file2maxtrix(filname)  # read the training data
    normMat, ranges, minVals = knn.autoNorm(datingDateMat)   # normalize it
    m = normMat.shape[0]  # number of rows
    numTestVecs = int(m * hoRatio)
    errorCount = 0.0
    for i in range(numTestVecs):
        # first 100 rows are the validation set, the remaining 900 the training set
        classifierResult = classify0(normMat[i, :], normMat[numTestVecs:m, :], datingLabels[numTestVecs:m], 3)
        print("the classifier came back with:%d, the real answer is : %d" % (classifierResult, datingLabels[i]))
        if classifierResult != datingLabels[i]:  # count misclassifications
            errorCount += 1.0
    print("the total error rate is:%f" % (errorCount / float(numTestVecs)))
    return
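The same hold-out scheme (the first hoRatio fraction of rows as validation, the rest as training) can be sketched end to end on synthetic data; the two-cluster data and the `knn_predict` helper below are my own, not the dating set:

```python
import numpy as np

def knn_predict(x, X, y, k=3):
    # Euclidean distance to every training row, then majority vote among the k nearest
    d = np.sqrt(((X - x) ** 2).sum(axis=1))
    nearest = y[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.2, (30, 2)), rng.normal(1, 0.2, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
shuffle = rng.permutation(len(y))  # shuffle so the hold-out slice mixes both classes
X, y = X[shuffle], y[shuffle]

numTestVecs = int(len(y) * 0.10)   # 10% hold-out, like hoRatio above
errors = sum(knn_predict(X[i], X[numTestVecs:], y[numTestVecs:]) != y[i]
             for i in range(numTestVecs))
print("error rate:", errors / numTestVecs)
```

With well-separated clusters the hold-out error is near zero; on real data like the dating set it settles around a few percent, as the output below shows.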

Output:
the classifier came back with:3, the real answer is : 3
the classifier came back with:2, the real answer is : 2
the classifier came back with:1, the real answer is : 1
the classifier came back with:1, the real answer is : 1
the classifier came back with:1, the real answer is : 1
the classifier came back with:1, the real answer is : 1
……
the classifier came back with:3, the real answer is : 1
the total error rate is:0.050000
5 errors out of 100 samples: an error rate of 0.05.


Return a classification for user-entered values:

def classifyPerson(filname):
    resultList = ['not at all', 'in small doses', 'in large doses']
    percentTats = float(input('percentage of time spent playing video games?'))
    ffMiles = float(input('frequent flier miles earned per year?'))
    iceCream = float(input('liters of ice cream consumed per year?'))
    datingDateMat, datingLabels = knn.file2maxtrix(filname)  # read the training data
    normMat, ranges, minVals = knn.autoNorm(datingDateMat)   # normalize it
    inArr = np.array([ffMiles, percentTats, iceCream])  # order must match the file's columns: miles, game time, ice cream
    inArrNorm = (inArr - minVals) / ranges  # normalize the query; the parentheses matter here
    classifierResult = knn.classify0(inArrNorm, normMat, datingLabels, 3)
    print("you will probably like this person:", resultList[classifierResult - 1])
    return
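One detail worth checking when normalizing a single query point: `inArr - minVals / ranges` divides first because of operator precedence, so the subtraction needs explicit parentheses. A tiny check on made-up numbers:

```python
import numpy as np

inArr = np.array([10.0, 10000.0, 0.5])
minVals = np.array([0.0, 0.0, 0.0])
ranges = np.array([20.0, 91273.0, 1.7])  # illustrative ranges, not the real dataset's

wrong = inArr - minVals / ranges    # division binds tighter: subtracts minVals/ranges
right = (inArr - minVals) / ranges  # the intended min-max scaling of the query
print(wrong)  # unchanged here, because minVals happens to be all zeros
print(right)
```

Without the parentheses the query stays on its raw scale, so a large feature like flier miles would dominate every distance computed against the normalized training set.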

Sample input and output:

percentage of time spent playing video games?10
frequent flier miles earned per year?10000
liters of ice cream consumed per year?0.5
you will probably like this person: not at all


