Machine Learning - Zhou Zhihua - Personal Exercise 4.3


4.3 Write a program implementing a decision-tree algorithm whose split selection is based on information entropy, and use it to generate a decision tree for the data in Table 4.3.


When I first read this chapter everything seemed easy to understand and the algorithms looked quite intuitive, but once I actually tried to program them I got stuck, so I kept putting it off until I reached the ensemble-learning chapter and realized I could not avoid it any longer. Honestly, I found the algorithms in this chapter genuinely tedious to implement.

(1)The first problem is how to tell continuous attributes from discrete ones when computing information entropy. I could only handle this by preprocessing the data: map the values of each discrete attribute to integers (1, 2, 3) and place all continuous attributes after the discrete ones (a small encoding sketch follows this list);

(2)Next, iterate over all attributes and compute the corresponding information entropy (the relevant formulas are restated after this list);

(3)Then select the attribute with the largest information gain (the tie-break rule is sketched after this list). When the tree was nearly finished I noticed that one of its split nodes differed from the book's: where the book splits on touch, my tree split on one of the continuous attributes (I forget whether it was sugar content or density). After some debugging I found that, in theory, either attribute attains the maximal information gain at that node (a relief, since I had feared a bug earlier in the implementation). To force agreement with the book I added an attribute-selection rule: discrete attributes take precedence over continuous ones. My rationalization is that with this few samples the split point of a continuous attribute is not very precise, so a discrete attribute is more reliable; I am not sure this justification really holds. Finally, collect the data subsets corresponding to each value of the chosen attribute in the current data set;

(4)Next, apply the recursive algorithm from the book and store the content of the decision tree as a nested dictionary;

(5)Finally, draw the resulting decision tree from the printed output (having to read a decision tree as a bare dictionary would be thoroughly user-hostile). At this point I found that my skills were apparently not up to doing this easily with matplotlib, and judging by the answers the experts posted online, the matplotlib renderings look rather ugly anyway XD. So I used igraph to draw the tree instead.
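
For step (1), a minimal encoding sketch. The concrete string values and the mapping below are my own illustrative choices (any consistent assignment of integers works); only one discrete attribute, color, is shown, and the continuous columns are kept untouched at the end:

# Hypothetical preprocessing sketch: map the raw string values of a
# discrete attribute to integer codes, keeping continuous columns as-is.
# The mapping below is an assumption for illustration; any consistent
# assignment works.
color_map = {'青绿': 1, '乌黑': 2, '浅白': 3}

def encode(raw_row):
    # raw_row: [color, density, sugar]; the discrete code goes first,
    # the continuous values stay untouched at the end.
    return [color_map[raw_row[0]]] + raw_row[1:]

print(encode(['青绿', 0.697, 0.460]))  # -> [1, 0.697, 0.46]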
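For reference, the quantities behind steps (2) and (3) are the book's information entropy and information gain, plus the bi-partition gain for a continuous attribute over the candidate midpoints (these are the standard chapter-4 definitions; the notation follows the book):

Ent(D) = -\sum_{k=1}^{|\mathcal{Y}|} p_k \log_2 p_k

Gain(D, a) = Ent(D) - \sum_{v=1}^{V} \frac{|D^v|}{|D|}\, Ent(D^v)

T_a = \left\{ \frac{a^i + a^{i+1}}{2} \;\middle|\; 1 \le i \le n-1 \right\}, \qquad Gain(D, a) = \max_{t \in T_a} \left[ Ent(D) - \sum_{\lambda \in \{-,+\}} \frac{|D_t^{\lambda}|}{|D|}\, Ent(D_t^{\lambda}) \right]

Since Ent(D) is shared by every candidate attribute, ranking attributes by Gain(D, a) is equivalent to ranking them by the (negated) weighted-entropy term alone, which is exactly what the code below does.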
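The tie-break from step (3) can be expressed as a composite sort key. A minimal sketch, assuming (as my code does) that the candidate scores live in a dict whose keys are 1-tuples (attr,) for discrete attributes and 2-tuples (attr, threshold) for continuous ones; the scores here are made up for illustration:

# Rank candidates by score (descending); on ties, shorter keys, i.e.
# discrete attributes stored as 1-tuples, beat continuous 2-tuples.
gains = {(6,): -10.02, (7, 0.3815): -10.02, (1,): -11.5}  # made-up scores
best = sorted(gains.items(), key=lambda kv: (kv[1], -len(kv[0])),
              reverse=True)[0][0]
print(best)  # (6,) -> the discrete attribute wins the tie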

Below are the code and the resulting figure (still not pretty, but readable):

# -*- coding: utf-8 -*-
# exercise 4.3: decision tree based on information entropy

import numpy as np
import igraph as ig
from collections import Counter


def all_class(D):  # number of distinct classes in D
    return len(np.unique(D[:, -1]))  # np.unique returns the array of unique elements


def diff_in_attr(D, A_code):
    # check whether the samples in D differ on any remaining discrete
    # attribute; False means every such attribute takes a single value
    cond = False
    for i in A_code[:-2]:  # the two continuous attributes are excluded
        if len(np.unique(D[:, i])) != 1:
            cond = True
            break
    return cond


def major_class(D):  # majority vote: the most common class in D
    c = Counter(D[:, -1]).most_common()[0][0]
    return c


def max_gain(D, A_code):
    # select the attribute with the maximal information gain
    N = len(D)  # current number of samples
    dict_class = {}  # class -> list of ids of the samples in that class
    for i in range(N):
        if D[i, -1] not in dict_class.keys():  # register a new class
            dict_class[D[i, -1]] = []
            dict_class[D[i, -1]].append(int(D[i, 0]))
        else:
            dict_class[D[i, -1]].append(int(D[i, 0]))
    Gain_D_A = {}
    for a in A_code[:-2]:
        # the discrete attributes in A (after several recursions the
        # indices in A_code may no longer be consecutive)
        dict_attr = {}  # attribute value -> list of ids of samples taking it
        for i in range(N):
            if D[i, a] not in dict_attr.keys():  # register a new value
                dict_attr[D[i, a]] = []
                dict_attr[D[i, a]].append(int(D[i, 0]))
            else:
                dict_attr[D[i, a]].append(int(D[i, 0]))
        # No need to compute the true Gain(D,a): its first term Ent(D) is
        # identical for every attribute, so only the second term is compared.
        # The score of attribute a is stored under the tuple key (a,), so
        # that len(key) later distinguishes discrete keys (a,) from
        # continuous keys (a, threshold); initialize the score to 0.
        Gain_D_A[(a,)] = 0
        for av, v in dict_attr.items():
            m = len(v)  # number of samples taking the current attribute value
            x2 = len(set(v) & set(dict_class[1.0]))  # note: x1 or x2 may be 0
            x1 = m - x2
            if x1:
                Gain_D_A[(a,)] += x1 * np.log2(x1 / m)
            if x2:
                Gain_D_A[(a,)] += x2 * np.log2(x2 / m)
    for a in A_code[-2:]:
        # the continuous attributes in A: density and sugar
        cmp = {}  # score of each candidate split point
        d_a = [D[i, a] for i in range(N)]
        sort_d_a = sorted(d_a)
        for t in range(N - 1):
            ls, mr, gain = [], [], 0
            divider = (sort_d_a[t] + sort_d_a[t + 1]) / 2
            for i in range(N):
                if D[i, a] < divider:
                    ls.append(int(D[i, 0]))
                else:
                    mr.append(int(D[i, 0]))
            less, more = len(ls), len(mr)
            less0 = len(set(ls) & set(dict_class[0.0]))
            more0 = len(set(mr) & set(dict_class[0.0]))
            less1, more1 = less - less0, more - more0
            for p in [less0, less1]:
                if p:
                    gain += p * np.log2(p / less)
            for p in [more0, more1]:
                if p:
                    gain += p * np.log2(p / more)
            cmp[t] = gain
        best_t = int(sorted(cmp, key=lambda x: cmp[x], reverse=True)[0])
        threshold = (sort_d_a[best_t] + sort_d_a[best_t + 1]) / 2
        Gain_D_A[(a, threshold)] = cmp[best_t]
    # if several attributes reach the maximal information gain, prefer the
    # discrete one: the secondary key -len(a[0]) ranks the 1-tuple keys of
    # discrete attributes ahead of the 2-tuple keys of continuous ones
    Gain_D_A_list = sorted(Gain_D_A.items(), key=lambda a: (a[1], -len(a[0])), reverse=True)
    best = Gain_D_A_list[0][0]
    if len(best) == 2:  # the best attribute is continuous
        a, threshold = best
        low = [int(D[i, 0]) for i in range(N) if D[i, a] <= threshold]
        high = [int(D[i, 0]) for i in range(N) if D[i, a] > threshold]
        # return the (absolute) attribute index, plus each value of that
        # attribute with the ids of the samples it contains
        return a, {'<=%.4f' % threshold: low, '>%.4f' % threshold: high}
    else:  # the best attribute is discrete
        dict_attr = {}
        best = int(best[0])
        for i in range(N):
            if D[i, best] not in dict_attr.keys():
                dict_attr[D[i, best]] = []
                dict_attr[D[i, best]].append(int(D[i, 0]))
            else:
                dict_attr[D[i, best]].append(int(D[i, 0]))
        # return the (absolute) attribute index, plus each value of that
        # attribute with the ids of the samples it contains
        return best, dict_attr


def Tree_Generate(D, A_code, full_D):
    if all_class(D) == 1:  # case 1: all samples belong to a single class
        return D[0, -1]
    # case 2: no discrete attributes left (only the two continuous ones),
    # or all samples are identical on the remaining discrete attributes
    if (len(A_code) == 2) or (not diff_in_attr(D, A_code)):
        return str(major_class(D))
    a, di = max_gain(D, A_code)
    tree = {A[a]: {}}
    new_A_code = A_code[:]
    if a not in A_code[-2:]:
        # a discrete attribute is used up: remove it, and pad in values
        # that are absent from the current subset
        all_a = np.unique(full_D[:, a])
        new_A_code.remove(a)
        for item in all_a:
            if item not in di.keys():
                di[item] = []
    for av, Dv in di.items():
        if Dv:
            tree[A[a]][av] = Tree_Generate(full_D[Dv, :], new_A_code, full_D)
        else:  # case 3: empty branch, label it with the majority class of D
            tree[A[a]][av] = 'empty: %s' % major_class(D)
    return tree


# Table 4.3 watermelon data: id, 6 discrete attributes, density, sugar, label
D = np.array([
    [0, 1, 1, 1, 1, 1, 1, 0.697, 0.460, 1],
    [1, 2, 1, 2, 1, 1, 1, 0.774, 0.376, 1],
    [2, 2, 1, 1, 1, 1, 1, 0.634, 0.264, 1],
    [3, 1, 1, 2, 1, 1, 1, 0.608, 0.318, 1],
    [4, 3, 1, 1, 1, 1, 1, 0.556, 0.215, 1],
    [5, 1, 2, 1, 1, 2, 2, 0.403, 0.237, 1],
    [6, 2, 2, 1, 2, 2, 2, 0.481, 0.149, 1],
    [7, 2, 2, 1, 1, 2, 1, 0.437, 0.211, 1],
    [8, 2, 2, 2, 2, 2, 1, 0.666, 0.091, 0],
    [9, 1, 3, 3, 1, 3, 2, 0.243, 0.267, 0],
    [10, 3, 3, 3, 3, 3, 1, 0.245, 0.057, 0],
    [11, 3, 1, 1, 3, 3, 2, 0.343, 0.099, 0],
    [12, 1, 2, 1, 2, 1, 1, 0.639, 0.161, 0],
    [13, 3, 2, 2, 2, 1, 1, 0.657, 0.198, 0],
    [14, 2, 2, 1, 1, 2, 2, 0.360, 0.370, 0],
    [15, 3, 1, 1, 3, 3, 1, 0.593, 0.042, 0],
    [16, 1, 1, 2, 2, 2, 1, 0.719, 0.103, 0]])
A = {0: 'id', 1: 'color', 2: 'root', 3: 'sound', 4: 'texture',
     5: 'heart', 6: 'touch', 7: 'density', 8: 'suger', 9: 'label'}
A_code = list(range(1, len(A) - 1))  # A_code = [1, 2, 3, 4, 5, 6, 7, 8]
tree = Tree_Generate(D, A_code, D)
print(tree)


def Tree_draw(tree_item, g, node=0):
    attr, cond = tree_item
    u_attr = '%s_%s' % (attr, node)  # unique vertex name, used to look the vertex up later
    g.add_vertex(u_attr)
    new_node = g.vs.find(name=u_attr).index
    g.vs[new_node]['label'] = str(attr)
    g.add_edge(node, new_node)
    if type(cond).__name__ == 'dict':
        for item in list(cond.items()):
            Tree_draw(item, g, new_node)
    else:  # a leaf: attach the class label
        u_cond = '%s_%s' % (attr, cond)
        g.add_vertex(u_cond)
        end_node = g.vs.find(name=u_cond).index
        g.vs[end_node]['label'] = str(int(cond))
        g.add_edge(new_node, end_node)
    return g


init_items = list(tree.items())[0]
g = ig.Graph()
g.add_vertex('Source')
g.vs[0]['label'] = 'DATA'
s = Tree_draw(init_items, g, node=0)


def shaper(x):  # degree-2 vertices are branch labels, drawn without a shape
    if x == 2:
        return 'hidden'
    else:
        return 'circle'


shape = list(map(shaper, s.vs.degree()))
label_size_map = {'hidden': 18, 'circle': 24}
label_size = [label_size_map[i] for i in shape]
edge_width = [2] * (len(shape) - 1)
edge_width[0] = 4
my_lay = g.layout_reingold_tilford(root=[0])
style = {"vertex_size": 26, "vertex_shape": shape, "vertex_color": 'pink',
         "vertex_label_size": label_size, "edge_width": edge_width,
         "layout": my_lay, "bbox": (600, 400), "margin": 35}
ig.plot(s, 'tree.png', **style)



The decision tree is printed in dictionary form as follows:

{'texture': {1.0: {'density': {'<=0.3815': 0.0, '>0.3815': 1.0}}, 2.0: {'touch': {1.0: 0.0, 2.0: 1.0}}, 3.0: 0.0}}
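
A hypothetical helper (not part of the original script) showing how this nested dict can be walked to classify a new sample. It assumes the sample is a dict keyed by the attribute names in A, and it parses the '<=x' / '>x' branch labels that the continuous splits produce; leaves are the class labels themselves:

# Hypothetical traversal of the nested-dict tree printed above;
# internal nodes are {attr: {branch_value: subtree}}, leaves are labels.
tree = {'texture': {1.0: {'density': {'<=0.3815': 0.0, '>0.3815': 1.0}},
                    2.0: {'touch': {1.0: 0.0, 2.0: 1.0}},
                    3.0: 0.0}}

def classify(node, sample):
    if not isinstance(node, dict):
        return node  # a leaf: the class label
    attr = next(iter(node))
    value = sample[attr]
    for label, subtree in node[attr].items():
        if isinstance(label, str) and label.startswith('<='):
            if value <= float(label[2:]):
                return classify(subtree, sample)
        elif isinstance(label, str) and label.startswith('>'):
            if value > float(label[1:]):
                return classify(subtree, sample)
        elif label == value:
            return classify(subtree, sample)
    return None  # no matching branch

print(classify(tree, {'texture': 1.0, 'density': 0.697}))  # -> 1.0 (good melon)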

[Figure: tree.png, the decision tree drawn with igraph]