Machine Learning in Action in C++, Part 2: Decision Trees


Algorithm overview: a decision tree is a common classification method. Based on the different feature values in the data set, the tree repeatedly splits the data into subsets until every branch contains only a single class or all features have been used.

The general workflow for decision trees:

(1) Collect the data.

(2) Prepare the data: the tree-construction algorithm works only on nominal data, so numeric values must be discretized, ideally into boolean values.

(3) Analyze the data: look for the feature that best splits the data set.

(4) Train: build the tree data structure.

(5) Test: compute the error rate with the decision tree.

(6) Use: apply the trained decision tree to classify new data.

 

1. Getting the Data

Requirements for the training set: we need to know the number of features, the value of each feature, and the label of every sample.

In C++ we no longer store the data set in a matrix. Instead, each sample is described by the following struct, and the samples are kept in a vector:

struct data
{
    int featNum;                 // number of features
    vector<bool> features;       // feature values
    string label;                // class label
};
vector<data> dataset;            // the data set


The book gives a table of marine-animal data (reconstructed here from the original table):

    Can survive without surfacing   Has flippers   Is a fish
1   yes                             yes            yes
2   yes                             yes            yes
3   yes                             no             no
4   no                              yes            no
5   no                              yes            no

Following this table, we create the data set:

data d1;
d1.featNum = 2;
d1.features.push_back(true);
d1.features.push_back(true);
d1.label = "yes";
dataset.push_back(d1);

2. Information Gain

A decision tree splits the data set on one of its features. Some features separate the classes well, others do not. For example, to tell cats from dogs, features such as "can swim" or "can climb trees" separate the two classes well, whereas "runs on four legs" separates them poorly. The first step in building the tree is to choose the feature that gives the best split; within each resulting subset we then choose the next-best feature, and so on, until every branch of the tree contains a single class or all features are used up.

The book shows a flowchart of a decision tree used to classify e-mail (figure not reproduced here).

To judge how well a feature splits the data, we need the notion of information gain, and for that a brief look at Shannon entropy; we will not go any deeper into information theory.

Shannon entropy measures how disordered information is. An example: boxes A and B each hold 10 balls. Box A holds 5 red and 5 white balls; box B holds 9 red and 1 white. A ball drawn from A is red or white with probability 50% each, while a ball drawn from B is red with probability 90% and white with probability 10%. When drawing from B we can be 90% sure the ball is red; that outcome carries more certainty, so the Shannon entropy is lower. If B contained only red balls, a red draw would have probability 100%, the outcome would be fully certain, and the entropy would be 0.

The ultimate goal of classification is that any sample drawn from a subset can be assigned to a class with high probability, i.e. that the Shannon entropy is minimized. The difference between the entropy before and after a split is called the information gain.
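Written out as a formula (using the standard definition, where D is the data set, a is a feature, and D_v is the subset of D in which a takes value v):

```latex
\mathrm{Gain}(D, a) = H(D) - \sum_{v} \frac{|D_v|}{|D|}\, H(D_v)
```

The second term is the entropy remaining after the split, weighted by the size of each subset; this is exactly what the code below computes.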

If the data to be classified can fall into one of several classes, the information of symbol xi is defined as (formula reconstructed, as the original image is missing):

    l(xi) = -log2 p(xi)

where p(xi) is the probability of choosing that class.

To compute the entropy, we sum the expected information over all possible values of all classes:

    H = -Σ (i = 1..n) p(xi) log2 p(xi)

where n is the number of classes and H is the resulting entropy. As an exercise, try computing the Shannon entropy of drawing a ball from box A and from box B in the example above.



Implementation of the Shannon entropy calculation:

double calcShannonEnt(vector<data> myData)
// Return the Shannon entropy of the given data set.
// myData: the data set
{
    size_t numEntries = myData.size();
    map<string, size_t> labelCounts;
    for (auto it = myData.begin(); it != myData.end(); it++)
    {
        string currentLabel = it->label;
        if (labelCounts.count(currentLabel) == 0)
        {
            labelCounts[currentLabel] = 0;
        }
        labelCounts[currentLabel] += 1;
    }
    double shannonEnt = 0.0;
    for (auto it_map = labelCounts.begin(); it_map != labelCounts.end(); it_map++)
    {
        double prob = (double)(it_map->second) / (double)numEntries;
        shannonEnt -= prob * log2(prob);
    }
    return shannonEnt;
}

Computing the Shannon entropy of the marine-animal data set:

createDataset();                              // build the data set
cout << calcShannonEnt(dataset) << endl;      // print its Shannon entropy

Output: (screenshot not reproduced here)

3. Splitting the Data Set

First, a function that splits the data set on a given feature:

vector<data> splitDataSet(vector<data> myData, int axis, bool value)
// Split the data set on the given feature and feature value.
// myData: data set  axis: feature index  value: feature value to match
{
    vector<data> retDataSet;
    for (auto it = myData.begin(); it != myData.end(); it++)
    {
        auto it_feat = it->features.begin();
        auto it_axis = it_feat + axis;
        data d;
        d.featNum = it->featNum - 1;
        if (*(it_axis) == value)
        {
            // copy every feature except the one we split on
            for (; it_feat != it->features.end(); it_feat++)
            {
                if (it_feat == it_axis)
                    continue;
                d.features.push_back(*(it_feat));
            }
            d.label = it->label;
            retDataSet.push_back(d);
        }
    }
    return retDataSet;
}

Testing the function:

int main()
{
    createDataset();                              // build the data set
    cout << "Original data set:" << endl;
    outputData(dataset);
    int axis = 0;
    bool value = true;
    cout << endl;
    cout << "Splitting on feature " << axis + 1 << " with value " << value << endl;
    cout << endl;
    vector<data> retData = splitDataSet(dataset, axis, value);  // subset where feature 1 == true
    cout << "Resulting subset:" << endl;
    outputData(retData);
    return 0;
}

Output: (screenshot not reproduced here)

The output shows that the first three samples satisfy the condition "feature 1 == 1", and the corresponding subset is produced.

 

Splitting the data set on a feature lowers its Shannon entropy, yielding some information gain; we want the split that maximizes that gain. The code:

size_t chooseBestFeatureToSplit(vector<data> myData)
// Find the best way to split the data set, i.e. the most suitable feature to split on.
// myData: data set
{
    double baseEntropy = calcShannonEnt(myData);   // entropy of the unsplit data set
    double bestInfoGain = 0.0;                     // largest information gain so far
    size_t bestFeature = 0;                        // the "best" feature
    for (int i = 0; i < myData.begin()->featNum; i++)
    {
        set<bool> featSet;                         // distinct values of feature i
        for (auto it = myData.begin(); it != myData.end(); it++)
        {
            featSet.insert(it->features[i]);
        }
        double newEntropy = 0;
        // compute the weighted entropy of this split
        // (fixed: split myData, not the global dataset)
        for (auto it_feat = featSet.begin(); it_feat != featSet.end(); it_feat++)
        {
            vector<data> subDataSet = splitDataSet(myData, i, *(it_feat));
            double prob = (double)subDataSet.size() / (double)myData.size();
            newEntropy += prob * calcShannonEnt(subDataSet);
        }
        double infoGain = baseEntropy - newEntropy;
        if (infoGain > bestInfoGain)               // keep the best information gain
        {
            bestInfoGain = infoGain;
            bestFeature = i;
        }
    }
    return bestFeature;                            // index of the best feature
}

This function finds the feature whose split gives the largest information gain.

Testing the function:

int main()
{
    createDataset();                              // build the data set
    cout << "Original data set:" << endl;
    cout << "Can survive without surfacing" << '\t' << "Has flippers" << '\t' << "Is a fish" << endl;
    outputData(dataset);
    cout << endl;
    cout << "The best feature is feature " << chooseBestFeatureToSplit(dataset) + 1 << endl;
    return 0;
}

Output: (screenshot not reproduced here)

4. Building the Decision Tree

Building the decision tree means repeatedly choosing a feature to split the data set, until every class has been separated out or all features have been used.

When all features are exhausted, a subset may still contain more than one class; in that case the subset's class is decided by majority vote.

string majorityCnt(vector<data> myData)
// If a leaf holds samples of several classes, decide its class by majority vote.
{
    string result;
    map<string, size_t> labelCounts;
    for (auto it = myData.begin(); it != myData.end(); it++)
    {
        string currentLabel = it->label;
        if (labelCounts.count(currentLabel) == 0)
        {
            labelCounts[currentLabel] = 0;
        }
        labelCounts[currentLabel] += 1;
    }
    auto it = labelCounts.begin();
    result = it->first;
    size_t num = it->second;
    for (; it != labelCounts.end(); it++)
    {
        if (it->second > num)
        {
            num = it->second;
            result = it->first;
        }
    }
    return result;
}

Now we build the tree:

node* createTree(vector<data> myData)
// Build the decision tree recursively.
{
    node* root = new node();
    set<string> labels_set;
    for (auto it = myData.begin(); it != myData.end(); it++)
    {
        labels_set.insert(it->label);
    }
    auto it = myData.begin();
    if (myData.size() == 1 || labels_set.size() == 1)    // one sample or one class: make a leaf
    {
        root->label = it->label;
        return root;
    }
    if (it->featNum == 0)                                // no features left: majority vote
    {
        root->label = majorityCnt(myData);
        return root;
    }
    size_t best_feat = chooseBestFeatureToSplit(myData);              // best feature to split on
    root->feature = best_feat;
    vector<data> left_data = splitDataSet(myData, best_feat, false);  // split into two subsets
    vector<data> right_data = splitDataSet(myData, best_feat, true);
    root->left = createTree(left_data);                               // build the subtrees
    root->right = createTree(right_data);
    return root;
}

5. Classifying

Once the tree is built, we use it to classify new samples:

string classify(node* tree, data input)
// Classify a sample with the trained decision tree.
{
    string result;
    if (tree->label != "")                   // leaf node: return its class
    {
        result = tree->label;
    }
    else
    {
        node* sub_tree;                      // internal node: recurse (fixed: no leaked "new node")
        size_t best_feat = tree->feature;    // feature tested at this node
        bool feat_val = input.features[best_feat];
        if (!feat_val)                       // false -> left subtree, true -> right subtree
        {
            sub_tree = tree->left;
        }
        else
        {
            sub_tree = tree->right;
        }
        input.featNum -= 1;
        // remove the feature value that has just been used
        input.features.erase(input.features.begin() + best_feat);
        result = classify(sub_tree, input);
    }
    return result;
}

Testing in main:

int main()
{
    createDataset();                              // build the data set
    cout << "Original data set:" << endl;
    cout << "Can survive without surfacing" << '\t' << "Has flippers" << '\t' << "Is a fish" << endl;
    outputData(dataset);
    node* tree = createTree(dataset);
    data d;
    d.featNum = 2;
    bool val;
    cout << "Enter the feature values:" << endl;
    for (int i = 0; i < 2; i++)
    {
        cin >> val;
        d.features.push_back(val);
    }
    string result = classify(tree, d);
    cout << result << endl;
}

Output: (screenshot not reproduced here)


Full code:

#include <iostream>
#include <cmath>
#include <map>
#include <string>
#include <vector>
#include <set>
using namespace std;

struct data
{
    int featNum;                 // number of features
    vector<bool> features;       // feature values
    string label;                // class label
};
vector<data> dataset;            // the data set

struct node
{
    size_t feature;              // feature tested at this node
    node* left;                  // subtree for feature value false
    node* right;                 // subtree for feature value true
    string label;                // class label (non-empty only at leaves)
};

void createDataset()
{
    // build the marine-animal data set from the book
    data d1;
    d1.featNum = 2;
    d1.features.push_back(true);
    d1.features.push_back(true);
    d1.label = "yes";
    dataset.push_back(d1);
    data d2;
    d2.featNum = 2;
    d2.features.push_back(true);
    d2.features.push_back(true);
    d2.label = "yes";
    dataset.push_back(d2);
    data d3;
    d3.featNum = 2;
    d3.features.push_back(true);
    d3.features.push_back(false);
    d3.label = "no";
    dataset.push_back(d3);
    data d4;
    d4.featNum = 2;
    d4.features.push_back(false);
    d4.features.push_back(true);
    d4.label = "no";
    dataset.push_back(d4);
    data d5;
    d5.featNum = 2;
    d5.features.push_back(false);
    d5.features.push_back(true);
    d5.label = "no";
    dataset.push_back(d5);
}

double calcShannonEnt(vector<data> myData)
// Return the Shannon entropy of the given data set.
{
    size_t numEntries = myData.size();
    map<string, size_t> labelCounts;
    for (auto it = myData.begin(); it != myData.end(); it++)
    {
        string currentLabel = it->label;
        if (labelCounts.count(currentLabel) == 0)
        {
            labelCounts[currentLabel] = 0;
        }
        labelCounts[currentLabel] += 1;
    }
    double shannonEnt = 0.0;
    for (auto it_map = labelCounts.begin(); it_map != labelCounts.end(); it_map++)
    {
        double prob = (double)(it_map->second) / (double)numEntries;
        shannonEnt -= prob * log2(prob);
    }
    return shannonEnt;
}

vector<data> splitDataSet(vector<data> myData, int axis, bool value)
// Split the data set on the given feature and feature value.
// myData: data set  axis: feature index  value: feature value to match
{
    vector<data> retDataSet;                       // the resulting subset
    for (auto it = myData.begin(); it != myData.end(); it++)
    {
        auto it_feat = it->features.begin();
        auto it_axis = it_feat + axis;
        data d;
        d.featNum = it->featNum - 1;
        if (*(it_axis) == value)
        {
            for (; it_feat != it->features.end(); it_feat++)
            {
                if (it_feat == it_axis)
                    continue;
                d.features.push_back(*(it_feat));
            }
            d.label = it->label;
            retDataSet.push_back(d);
        }
    }
    return retDataSet;
}

size_t chooseBestFeatureToSplit(vector<data> myData)
// Find the best way to split the data set, i.e. the most suitable feature to split on.
{
    double baseEntropy = calcShannonEnt(myData);   // entropy of the unsplit data set
    double bestInfoGain = 0.0;                     // largest information gain so far
    size_t bestFeature = 0;                        // the "best" feature
    for (int i = 0; i < myData.begin()->featNum; i++)
    {
        set<bool> featSet;                         // distinct values of feature i
        for (auto it = myData.begin(); it != myData.end(); it++)
        {
            featSet.insert(it->features[i]);
        }
        double newEntropy = 0;
        // weighted entropy of this split (fixed: split myData, not the global dataset)
        for (auto it_feat = featSet.begin(); it_feat != featSet.end(); it_feat++)
        {
            vector<data> subDataSet = splitDataSet(myData, i, *(it_feat));
            double prob = (double)subDataSet.size() / (double)myData.size();
            newEntropy += prob * calcShannonEnt(subDataSet);
        }
        double infoGain = baseEntropy - newEntropy;
        if (infoGain > bestInfoGain)
        {
            bestInfoGain = infoGain;
            bestFeature = i;
        }
    }
    return bestFeature;
}

void outputData(vector<data> myData)
// Print every sample: feature values, then the label.
{
    for (auto it = myData.begin(); it != myData.end(); it++)
    {
        for (auto it_feat = it->features.begin(); it_feat != it->features.end(); it_feat++)
        {
            cout << *(it_feat) << '\t';
        }
        cout << it->label << endl;
    }
}

string majorityCnt(vector<data> myData)
// If a leaf holds samples of several classes, decide its class by majority vote.
{
    string result;
    map<string, size_t> labelCounts;
    for (auto it = myData.begin(); it != myData.end(); it++)
    {
        string currentLabel = it->label;
        if (labelCounts.count(currentLabel) == 0)
        {
            labelCounts[currentLabel] = 0;
        }
        labelCounts[currentLabel] += 1;
    }
    auto it = labelCounts.begin();
    result = it->first;
    size_t num = it->second;
    for (; it != labelCounts.end(); it++)
    {
        if (it->second > num)
        {
            num = it->second;
            result = it->first;
        }
    }
    return result;
}

node* createTree(vector<data> myData)
// Build the decision tree recursively.
{
    node* root = new node();
    set<string> labels_set;
    for (auto it = myData.begin(); it != myData.end(); it++)
    {
        labels_set.insert(it->label);
    }
    auto it = myData.begin();
    if (myData.size() == 1 || labels_set.size() == 1)    // one sample or one class: make a leaf
    {
        root->label = it->label;
        return root;
    }
    if (it->featNum == 0)                                // no features left: majority vote
    {
        root->label = majorityCnt(myData);
        return root;
    }
    size_t best_feat = chooseBestFeatureToSplit(myData);
    root->feature = best_feat;
    vector<data> left_data = splitDataSet(myData, best_feat, false);  // split into two subsets
    vector<data> right_data = splitDataSet(myData, best_feat, true);
    root->left = createTree(left_data);                               // build the subtrees
    root->right = createTree(right_data);
    return root;
}

string classify(node* tree, data input)
// Classify a sample with the trained decision tree.
{
    string result;
    if (tree->label != "")                   // leaf node: return its class
    {
        result = tree->label;
    }
    else
    {
        node* sub_tree;                      // internal node: recurse
        size_t best_feat = tree->feature;
        bool feat_val = input.features[best_feat];
        if (!feat_val)                       // false -> left subtree, true -> right subtree
        {
            sub_tree = tree->left;
        }
        else
        {
            sub_tree = tree->right;
        }
        input.featNum -= 1;
        // remove the feature value that has just been used
        input.features.erase(input.features.begin() + best_feat);
        result = classify(sub_tree, input);
    }
    return result;
}

int main()
{
    createDataset();                              // build the data set
    cout << "Original data set:" << endl;
    cout << "Can survive without surfacing" << '\t' << "Has flippers" << '\t' << "Is a fish" << endl;
    outputData(dataset);
    node* tree = createTree(dataset);
    data d;
    d.featNum = 2;
    bool val;
    cout << "Enter the feature values:" << endl;
    for (int i = 0; i < 2; i++)
    {
        cin >> val;
        d.features.push_back(val);
    }
    string result = classify(tree, d);
    cout << "Is a fish: " << result << endl;
}



