Reading Notes: Machine Learning in Action (2): Chapter 3 Decision Tree Code, with Personal Notes and Comments
First, my own understanding of decision trees:
A decision tree is a classification method that extracts classification rules from data whose classes are already known (the training set), by repeatedly choosing the feature split with the largest information gain (equivalently, the smallest resulting entropy).
Shannon entropy:

    H(X) = -∑ᵢ p(xᵢ) · log₂ p(xᵢ)

where the logarithm is base 2 and p(xᵢ) is the proportion of examples in class xᵢ. Here it is enough to understand entropy as the expected value of the information content; for a deeper treatment, see Wikipedia:
http://zh.wikipedia.org/zh-sg/熵_(信息论)
Information gain: the change in information before and after splitting the data set (the book's definition).
In practice, finding the tree structure with the largest information gain means maximizing the difference between the entropy of the original data set and the weighted entropy after the split. The entropy of the original data set is a constant, so maximizing the information gain is equivalent to finding the split that minimizes the entropy of the resulting subsets. (Alternatively, "impurity" or "error rate" can be used as the evaluation criterion; for impurity, e.g. Gini impurity, see Wikipedia; error rate means just what it says.)
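To make these criteria concrete, here is a minimal sketch (my own, not from the book; the function names entropy, gini, and error_rate are mine) that evaluates all three on the class distribution of the book's toy dataset, which has 2 'yes' and 3 'no' examples:

    from math import log

    def entropy(probs):
        # Shannon entropy: -sum(p * log2(p)), skipping zero-probability classes
        return -sum(p * log(p, 2) for p in probs if p > 0)

    def gini(probs):
        # Gini impurity: 1 - sum(p^2)
        return 1.0 - sum(p * p for p in probs)

    def error_rate(probs):
        # misclassification error: 1 - max(p)
        return 1.0 - max(probs)

    probs = [2 / 5.0, 3 / 5.0]   # 2 'yes', 3 'no'
    print(entropy(probs))        # ~0.971
    print(gini(probs))           # 0.48
    print(error_rate(probs))     # 0.4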
With entropy as the criterion, the book's code for choosing the best feature to split on is as follows:
def chooseBestFeatureToSplit(dataSet):
    numFeatures = len(dataSet[0]) - 1   # the last column is used for the labels
    baseEntropy = calcShannonEnt(dataSet)
    bestInfoGain = 0.0
    bestFeature = -1
    for i in range(numFeatures):        # iterate over all the features
        featList = [example[i] for example in dataSet]  # create a list of all the examples of this feature
        uniqueVals = set(featList)      # get a set of unique values
        newEntropy = 0.0
        for value in uniqueVals:
            subDataSet = splitDataSet(dataSet, i, value)
            prob = len(subDataSet) / float(len(dataSet))
            newEntropy += prob * calcShannonEnt(subDataSet)
        infoGain = baseEntropy - newEntropy  # calculate the info gain; ie reduction in entropy
        if (infoGain > bestInfoGain):        # compare this to the best gain so far
            bestInfoGain = infoGain          # if better than current best, set to best
            bestFeature = i
    return bestFeature
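As a quick sanity check (my own snippet, not from the book's listing), with the full module below loaded, the toy dataset should pick feature 0, i.e. 'no surfacing', as the best first split:

    myDat, labels = createDataSet()
    print(chooseBestFeatureToSplit(myDat))  # expected: 0 ('no surfacing')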
Applying this "maximize the information gain" rule recursively, we build the best decision tree for a batch of data with known classes (the training set), and then use that tree structure to classify unseen data (the test set). The complete code is as follows:
#!/usr/bin/env python
# coding=utf-8
'''
Created on Oct 12, 2010
Decision Tree Source Code for Machine Learning in Action Ch. 3
@author: Peter Harrington
'''
from math import log
import operator

def createDataSet():
    dataSet = [[1, 1, 'yes'],
               [1, 1, 'yes'],
               [1, 0, 'no'],
               [0, 1, 'no'],
               [0, 1, 'no']]
    labels = ['no surfacing', 'flippers']  # change to discrete values
    return dataSet, labels

def calcShannonEnt(dataSet):  # compute the Shannon entropy
    numEntries = len(dataSet)
    labelCounts = {}  # dict that counts occurrences of each class label
    for featVec in dataSet:  # count the unique labels and their occurrences
        currentLabel = featVec[-1]  # the last element is the label of this example
        if currentLabel not in labelCounts:
            labelCounts[currentLabel] = 0  # unseen label: add a new key, count starts at 0
        labelCounts[currentLabel] += 1  # accumulate the count for this label
    shannonEnt = 0.0
    for key in labelCounts:
        prob = float(labelCounts[key]) / numEntries
        shannonEnt -= prob * log(prob, 2)  # Shannon formula: -sum(p_i * log2(p_i))
    return shannonEnt

def splitDataSet(dataSet, axis, value):
    retDataSet = []
    for featVec in dataSet:
        if featVec[axis] == value:
            reducedFeatVec = featVec[:axis]  # chop out axis used for splitting
            reducedFeatVec.extend(featVec[axis+1:])  # simple slicing: drop the split feature, keep the rest
            retDataSet.append(reducedFeatVec)
    return retDataSet

def chooseBestFeatureToSplit(dataSet):
    numFeatures = len(dataSet[0]) - 1   # the last column is used for the labels
    baseEntropy = calcShannonEnt(dataSet)
    bestInfoGain = 0.0
    bestFeature = -1
    for i in range(numFeatures):        # iterate over all the features
        featList = [example[i] for example in dataSet]  # create a list of all the examples of this feature
        uniqueVals = set(featList)      # get a set of unique values
        newEntropy = 0.0
        for value in uniqueVals:
            subDataSet = splitDataSet(dataSet, i, value)
            prob = len(subDataSet) / float(len(dataSet))
            newEntropy += prob * calcShannonEnt(subDataSet)
        infoGain = baseEntropy - newEntropy  # calculate the info gain; ie reduction in entropy
        if (infoGain > bestInfoGain):        # compare this to the best gain so far
            bestInfoGain = infoGain          # if better than current best, set to best
            bestFeature = i
    return bestFeature                       # returns an integer

def majorityCnt(classList):
    classCount = {}
    for vote in classList:
        if vote not in classCount:
            classCount[vote] = 0
        classCount[vote] += 1
    sortedClassCount = sorted(classCount.items(),
                              key=operator.itemgetter(1), reverse=True)
    return sortedClassCount[0][0]

def createTree(dataSet, labels):
    classList = [example[-1] for example in dataSet]
    if classList.count(classList[0]) == len(classList):
        return classList[0]  # stop splitting when all of the classes are equal
    if len(dataSet[0]) == 1:  # stop splitting when there are no more features in dataSet
        return majorityCnt(classList)
    bestFeat = chooseBestFeatureToSplit(dataSet)
    bestFeatLabel = labels[bestFeat]
    myTree = {bestFeatLabel: {}}
    del(labels[bestFeat])
    featValues = [example[bestFeat] for example in dataSet]
    uniqueVals = set(featValues)
    for value in uniqueVals:
        subLabels = labels[:]  # copy all of labels, so trees don't mess up existing labels
        myTree[bestFeatLabel][value] = createTree(splitDataSet(dataSet, bestFeat, value), subLabels)
    return myTree

def classify(inputTree, featLabels, testVec):
    firstStr = list(inputTree.keys())[0]
    secondDict = inputTree[firstStr]
    featIndex = featLabels.index(firstStr)
    key = testVec[featIndex]
    valueOfFeat = secondDict[key]
    if isinstance(valueOfFeat, dict):
        classLabel = classify(valueOfFeat, featLabels, testVec)
    else:
        classLabel = valueOfFeat
    return classLabel

def storeTree(inputTree, filename):
    import pickle
    fw = open(filename, 'wb')  # binary mode for pickle
    pickle.dump(inputTree, fw)
    fw.close()

def grabTree(filename):
    import pickle
    fr = open(filename, 'rb')
    return pickle.load(fr)

if __name__ == '__main__':
    dataSet, labels = createDataSet()
    print(dataSet)
    print(labels)
    shannonEnt = calcShannonEnt(dataSet)
    print(shannonEnt)
    myTree = createTree(dataSet, labels)
    print('mytree:')
    print(myTree)
    dataSet, labels = createDataSet()  # createTree deleted entries from labels, so rebuild them
    print(classify(myTree, labels, [1, 1]))
    import treePlotter
    # treePlotter.createPlot(myTree)
    fr = open('lenses.txt')
    lenses = [inst.strip().split('\t') for inst in fr.readlines()]
    lensesLabels = ['age', 'prescript', 'astigmatic', 'tearRate']
    lensesTree = createTree(lenses, lensesLabels)
    print(lensesTree)
    treePlotter.createPlot(lensesTree)
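For reference, on the toy dataset calcShannonEnt should return roughly 0.971, createTree should produce {'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}} (the same tree hard-coded in retrieveTree in the plotting module below), and classify(myTree, labels, [1, 1]) should print 'yes'. The storeTree/grabTree pair is not exercised in the main block above; a quick round-trip check (my own snippet; the filename is arbitrary) would look like:

    dataSet, labels = createDataSet()
    myTree = createTree(dataSet, labels[:])  # pass a copy, since createTree mutates labels
    storeTree(myTree, 'classifierStorage.txt')
    print(grabTree('classifierStorage.txt'))  # should equal myTree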
I added comments of my own in a few places; the author's original English comments are already good, so I kept them unchanged.
I had never studied decision trees in detail before; I assumed they were just classification logic arranged as a tree, a pile of if/else branches. After working through the chapter, here is my understanding:
1. It really is, in a sense, a pile of if/else statements; that is a fair way to read the tree structure.
2. The construction criterion (what drives each split) is maximum information gain, i.e. minimum resulting entropy.
3. Pros: very readable, the logic is simple and easy to follow, and classifying an example takes at most as many steps as the depth of the tree. Cons: it overfits very easily, so the resulting tree often generalizes poorly.
4. Precisely because it greedily chases the optimal split at every step, a decision tree tends to overfit. The book's remedy is to merge small or similar branches after the tree has been built, i.e. post-pruning, though this costs extra time. A representative approach uses k-fold cross-validation: repeatedly prune, evaluate the current error rate, and keep the best pruned tree; it is a bit like solving the whole problem first and then searching backwards for a suitable "early stop" (see the sketch below). Another well-known remedy is the random forest, a more complex system that does work better in practice; I will come back to random forests in a later post.
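To make point 4 a bit more concrete, below is a rough sketch of reduced-error post-pruning that I wrote against the dict-based tree format used above; it is not the book's code. It reuses classify() from the module above, while treeError, majorityLeaf, and pruneTree are helper names I introduce here. For simplicity it scores every candidate collapse on the whole validation set; a full implementation would route each validation example only to the subtree it actually reaches.

    from collections import Counter

    def treeError(tree, featLabels, valSet):
        # Fraction of validation examples (features, with the label in the last
        # column) that the (sub)tree misclassifies. Assumes every feature value
        # seen in valSet also appears as a branch in the tree.
        errors = sum(1 for ex in valSet
                     if classify(tree, featLabels, ex[:-1]) != ex[-1])
        return errors / float(len(valSet))

    def majorityLeaf(tree):
        # Most common leaf label in a subtree, used as the collapsed prediction.
        leaves, stack = [], [tree]
        while stack:
            node = stack.pop()
            for child in node[list(node.keys())[0]].values():
                if isinstance(child, dict):
                    stack.append(child)
                else:
                    leaves.append(child)
        return Counter(leaves).most_common(1)[0][0]

    def pruneTree(tree, featLabels, valSet):
        # Bottom-up: prune the children first, then try collapsing this node
        # into its majority leaf, keeping the collapse if the validation error
        # is no worse.
        if not isinstance(tree, dict):
            return tree
        feat = list(tree.keys())[0]
        for value, child in tree[feat].items():
            tree[feat][value] = pruneTree(child, featLabels, valSet)
        leaf = majorityLeaf(tree)
        leafError = sum(1 for ex in valSet if ex[-1] != leaf) / float(len(valSet))
        return leaf if leafError <= treeError(tree, featLabels, valSet) else tree

    # usage sketch: myTree = pruneTree(myTree, labels, validationSet)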
Below is the book's code for plotting the tree with matplotlib:
'''
Created on Oct 14, 2010
@author: Peter Harrington
'''
import matplotlib.pyplot as plt

decisionNode = dict(boxstyle="sawtooth", fc="0.8")
leafNode = dict(boxstyle="round4", fc="0.8")
arrow_args = dict(arrowstyle="<-")

def getNumLeafs(myTree):
    numLeafs = 0
    firstStr = list(myTree.keys())[0]
    secondDict = myTree[firstStr]
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict':  # test to see if the nodes are dictionaries; if not, they are leaf nodes
            numLeafs += getNumLeafs(secondDict[key])
        else:
            numLeafs += 1
    return numLeafs

def getTreeDepth(myTree):
    maxDepth = 0
    firstStr = list(myTree.keys())[0]
    secondDict = myTree[firstStr]
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict':  # test to see if the nodes are dictionaries; if not, they are leaf nodes
            thisDepth = 1 + getTreeDepth(secondDict[key])
        else:
            thisDepth = 1
        if thisDepth > maxDepth:
            maxDepth = thisDepth
    return maxDepth

def plotNode(nodeTxt, centerPt, parentPt, nodeType):
    createPlot.ax1.annotate(nodeTxt, xy=parentPt, xycoords='axes fraction',
                            xytext=centerPt, textcoords='axes fraction',
                            va="center", ha="center", bbox=nodeType, arrowprops=arrow_args)

def plotMidText(cntrPt, parentPt, txtString):
    xMid = (parentPt[0] - cntrPt[0]) / 2.0 + cntrPt[0]
    yMid = (parentPt[1] - cntrPt[1]) / 2.0 + cntrPt[1]
    createPlot.ax1.text(xMid, yMid, txtString, va="center", ha="center", rotation=30)

def plotTree(myTree, parentPt, nodeTxt):  # the first key tells you what feat was split on
    numLeafs = getNumLeafs(myTree)  # this determines the x width of this tree
    depth = getTreeDepth(myTree)
    firstStr = list(myTree.keys())[0]  # the text label for this node should be this
    cntrPt = (plotTree.xOff + (1.0 + float(numLeafs)) / 2.0 / plotTree.totalW, plotTree.yOff)
    plotMidText(cntrPt, parentPt, nodeTxt)
    plotNode(firstStr, cntrPt, parentPt, decisionNode)
    secondDict = myTree[firstStr]
    plotTree.yOff = plotTree.yOff - 1.0 / plotTree.totalD
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict':  # if you do get a dictionary you know it's a subtree
            plotTree(secondDict[key], cntrPt, str(key))  # recursion
        else:  # it's a leaf node: print the leaf node
            plotTree.xOff = plotTree.xOff + 1.0 / plotTree.totalW
            plotNode(secondDict[key], (plotTree.xOff, plotTree.yOff), cntrPt, leafNode)
            plotMidText((plotTree.xOff, plotTree.yOff), cntrPt, str(key))
    plotTree.yOff = plotTree.yOff + 1.0 / plotTree.totalD

def createPlot(inTree):
    fig = plt.figure(1, facecolor='white')
    fig.clf()
    axprops = dict(xticks=[], yticks=[])
    createPlot.ax1 = plt.subplot(111, frameon=False, **axprops)  # no ticks
    # createPlot.ax1 = plt.subplot(111, frameon=False)  # ticks for demo purposes
    plotTree.totalW = float(getNumLeafs(inTree))
    plotTree.totalD = float(getTreeDepth(inTree))
    plotTree.xOff = -0.5 / plotTree.totalW
    plotTree.yOff = 1.0
    plotTree(inTree, (0.5, 1.0), '')
    plt.show()

# def createPlot():
#     fig = plt.figure(1, facecolor='white')
#     fig.clf()
#     createPlot.ax1 = plt.subplot(111, frameon=False)  # ticks for demo purposes
#     plotNode('a decision node', (0.5, 0.1), (0.1, 0.5), decisionNode)
#     plotNode('a leaf node', (0.8, 0.1), (0.3, 0.8), leafNode)
#     plt.show()

def retrieveTree(i):
    listOfTrees = [{'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}},
                   {'no surfacing': {0: 'no', 1: {'flippers': {0: {'head': {0: 'no', 1: 'yes'}}, 1: 'no'}}}}]
    return listOfTrees[i]

# createPlot(thisTree)
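For a quick standalone test of the plotting module (my own snippet; it assumes the code above is saved as treePlotter.py, which is the filename the main block imports), one of the hard-coded trees can be rendered directly:

    import treePlotter

    myTree = treePlotter.retrieveTree(0)
    print(treePlotter.getNumLeafs(myTree))   # 3
    print(treePlotter.getTreeDepth(myTree))  # 2
    treePlotter.createPlot(myTree)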