Machine Learning in Action, Part 2: the k-Nearest Neighbors (kNN) Algorithm in Python


The idea, in short: measure the distance between feature vectors, and assign a point to whichever class is most common among its k nearest neighbors.

Concretely, the implementation goes like this.

First, import the numpy module and create a data set:

from numpy import *
import operator

def createDataSet():
    group = array([[1.0, 1.1], [1.0, 1.0], [0, 0], [0, 0.1]])
    labels = ['A', 'A', 'B', 'B']
    return group, labels

group, labels = createDataSet()

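A quick sanity check of what createDataSet returns (a self-contained sketch that repeats the function from above):

```python
from numpy import array

def createDataSet():
    group = array([[1.0, 1.1], [1.0, 1.0], [0, 0], [0, 0.1]])
    labels = ['A', 'A', 'B', 'B']
    return group, labels

group, labels = createDataSet()
print(group.shape)   # (4, 2): four samples, two features each
print(labels)        # ['A', 'A', 'B', 'B']
```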

The kNN classifier (inX is the input vector, k is the chosen number of neighbors):

def classify0(inX, dataSet, labels, k):
    dataSetSize = dataSet.shape[0]
    diffMat = tile(inX, (dataSetSize, 1)) - dataSet   # repeat inX and subtract each row
    sqDiffMat = diffMat**2
    sqDistances = sqDiffMat.sum(axis=1)
    distances = sqDistances**0.5                      # Euclidean distance to each sample
    sortedDistIndicies = distances.argsort()
    classCount = {}
    for i in range(k):                                # vote among the k nearest neighbors
        voteIlabel = labels[sortedDistIndicies[i]]
        classCount[voteIlabel] = classCount.get(voteIlabel, 0) + 1
    sortedClassCount = sorted(classCount.items(), key=operator.itemgetter(1), reverse=True)
    return sortedClassCount[0][0]
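The distance computation inside classify0 hinges on tile, which stacks inX into a matrix the same shape as dataSet so the subtraction works row by row. A standalone sketch of just that step, using the dataset from createDataSet:

```python
from numpy import array, tile

dataSet = array([[1.0, 1.1], [1.0, 1.0], [0, 0], [0, 0.1]])
inX = [0, 0]

diffMat = tile(inX, (4, 1)) - dataSet             # inX stacked into a 4x2 matrix, then subtracted
distances = ((diffMat ** 2).sum(axis=1)) ** 0.5   # Euclidean distance to each sample
print(distances.argsort())                        # [2 3 1 0]: sample indices, nearest first
```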

Test with inX = [0, 0]:

inX = [0, 0]
classify0(inX, group, labels, 3)   # returns 'B'

Now suppose we have a text file. How do we convert it into a format we can use? (Screenshot omitted: each line holds three tab-separated numeric features followed by an integer class label.)

Convert it with file2matrix:

def file2matrix(filename):
    fr = open(filename)
    arrayOLines = fr.readlines()
    numberOfLines = len(arrayOLines)        # number of lines in the file
    returnMat = zeros((numberOfLines, 3))   # matrix to return
    classLabelVector = []                   # labels to return
    index = 0
    for line in arrayOLines:
        line = line.strip()                 # strip leading/trailing whitespace
        listFromLine = line.split('\t')
        returnMat[index, :] = listFromLine[0:3]
        classLabelVector.append(int(listFromLine[-1]))
        index += 1
    return returnMat, classLabelVector
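To try file2matrix without the book's data file, we can write a tiny file in the same layout (the two rows below are made-up values, not taken from datingTestSet2.txt):

```python
from numpy import zeros

def file2matrix(filename):
    fr = open(filename)
    arrayOLines = fr.readlines()
    returnMat = zeros((len(arrayOLines), 3))
    classLabelVector = []
    for index, line in enumerate(arrayOLines):
        listFromLine = line.strip().split('\t')
        returnMat[index, :] = listFromLine[0:3]         # first three fields: features
        classLabelVector.append(int(listFromLine[-1]))  # last field: class label
    return returnMat, classLabelVector

# hypothetical two-line sample in the same tab-separated format
with open('sample.txt', 'w') as f:
    f.write('40920\t8.326976\t0.953952\t3\n14488\t7.153469\t1.673904\t2\n')

mat, labels = file2matrix('sample.txt')
print(mat.shape)   # (2, 3)
print(labels)      # [3, 2]
```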

import os
os.chdir('//home//yuan//machinelearninginaction//Ch02')
filename = 'datingTestSet2.txt'
datingDatMat, datingLabels = file2matrix(filename)

Plot the data with matplotlib:

import matplotlib
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(111)
ax.scatter(datingDatMat[:, 1], datingDatMat[:, 2])
plt.show()

Normalize the features of dataSet to the [0, 1] range:

def autoNorm(dataSet):
    minVals = dataSet.min(0)        # column-wise minima
    maxVals = dataSet.max(0)        # column-wise maxima
    ranges = maxVals - minVals
    normDataSet = zeros(shape(dataSet))
    m = dataSet.shape[0]
    normDataSet = dataSet - tile(minVals, (m, 1))
    normDataSet = normDataSet / tile(ranges, (m, 1))   # element-wise divide
    return normDataSet, ranges, minVals
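autoNorm rescales each column via (value - min) / (max - min). A quick check on a toy matrix with features on very different scales (the function is repeated so the snippet runs on its own):

```python
from numpy import array, tile, zeros, shape

def autoNorm(dataSet):
    minVals = dataSet.min(0)
    maxVals = dataSet.max(0)
    ranges = maxVals - minVals
    normDataSet = zeros(shape(dataSet))
    m = dataSet.shape[0]
    normDataSet = dataSet - tile(minVals, (m, 1))
    normDataSet = normDataSet / tile(ranges, (m, 1))
    return normDataSet, ranges, minVals

# toy data (hypothetical values): second feature ~20x larger than the first
data = array([[10.0, 200.0], [20.0, 400.0], [30.0, 600.0]])
normMat, ranges, minVals = autoNorm(data)
print(normMat)    # rows: [0, 0], [0.5, 0.5], [1, 1]
print(ranges)     # [ 20. 400.]
```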

normMat, ranges, minVals = autoNorm(datingDatMat)

Test the program with "datingTestSet2.txt" (only the first ten samples are tested here):

def datingClassTest():
    hoRatio = 0.50        # hold out 50% of the data
    datingDataMat, datingLabels = file2matrix('datingTestSet2.txt')   # load data set from file
    normMat, ranges, minVals = autoNorm(datingDataMat)
    m = normMat.shape[0]
    numTestVecs = int(m * hoRatio)
    errorCount = 0.0
    for i in range(10):   # only the first ten samples are tested here
        classifierResult = classify0(normMat[i, :], normMat[numTestVecs:m, :], datingLabels[numTestVecs:m], 3)
        print("the classifier came back with: %d, the real answer is: %d" % (classifierResult, datingLabels[i]))
        if classifierResult != datingLabels[i]:
            errorCount += 1.0
    print("the total error rate is: %f" % (errorCount / float(numTestVecs)))
    print(errorCount)

datingClassTest()
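The same hold-out style of test can be run end to end on synthetic data to check the pipeline without the book's file (a sketch with made-up points, not the dating dataset):

```python
import operator
from numpy import array, tile

def classify0(inX, dataSet, labels, k):
    dataSetSize = dataSet.shape[0]
    diffMat = tile(inX, (dataSetSize, 1)) - dataSet
    distances = ((diffMat ** 2).sum(axis=1)) ** 0.5
    sortedDistIndicies = distances.argsort()
    classCount = {}
    for i in range(k):
        voteIlabel = labels[sortedDistIndicies[i]]
        classCount[voteIlabel] = classCount.get(voteIlabel, 0) + 1
    return sorted(classCount.items(), key=operator.itemgetter(1), reverse=True)[0][0]

# two well-separated clusters (made-up data): label 1 near (0, 0), label 2 near (5, 5)
train = array([[0.1, 0.2], [0.0, 0.1], [5.0, 5.1], [4.9, 5.0]])
trainLabels = [1, 1, 2, 2]
test = array([[0.05, 0.05], [5.05, 5.05]])
testLabels = [1, 2]

errorCount = 0
for i in range(len(test)):
    if classify0(test[i], train, trainLabels, 3) != testLabels[i]:
        errorCount += 1
print("error rate: %f" % (errorCount / float(len(test))))   # 0.000000 on this toy set
```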
