A scikit-learn machine-learning workflow using OneHotEncoder for nominal attributes (with Random Forest as the example)


Real-world machine-learning data usually contains both nominal and numeric attributes. scikit-learn provides tools for numeric features, such as normalization, and it also provides a tool for nominal features: OneHotEncoder. This article shows how to transform nominal data with OneHotEncoder and how to feed the result into a machine-learning workflow.

1. Prepare the data

The data used in this article is in CSV format and contains both numeric and nominal attributes. The attributes are described below (ARFF-style):

@attribute 'birthday' numeric
@attribute 'astrology' {'1','2','3','4','5','6','7','8','9','10','11','12'}
@attribute 'animalsign' {'0','1','2','3','4','5','6','7','8','9','10','11','12'}
@attribute 'height' numeric
@attribute 'degree' {'0','1','2','3','4','5','6','7','8'}
@attribute 'housing' {'0','1','2','3','4'}
@attribute 'marriage' {'0','1','2','3','4'}
@attribute 'income' {'0','1','2','3','4','5','6','7','8','9','10','11','12'}
@attribute 'haveChildren' {'1','2','3','4'}
@attribute 'hasMainPhoto' {'0','1'}
@attribute 'nationality' {'0','1','2','3','4','5','6','7','8','9','10','11','12'}
@attribute 'religion' {'0','1','2','3','4','5','6','7','8','9','10','11','12'}
@attribute 'bodyType' numeric
@attribute 'physicalLooking' numeric
@attribute 'newNature' {'0','1','2','3','4','5','6','7','8'}
@attribute 'industry' {'0','1','2','3','4','5','6','7','8','9','10','11','12','13','14','15','16','17','18','19','20','21','22','23','24','25','26','27','28','29','30'}
@attribute 'newWorkStatus' {'0','1','2','3','4','5','6','7','8','9'}
@attribute 'newCar' {'0','1','2','3','4'}
@attribute 'isCreditedBySfz' {'0','1'}
@attribute 'cregisterTime' numeric
@attribute 'age' numeric
@attribute 'housestatus' {'0','1','2','3','4','5','6','7','8'}
@attribute 'photonum' numeric
@attribute 'msgcnt' numeric
@attribute 'himsgcnt' numeric
@attribute 'huifumsgcnt' numeric
@attribute 'receivemsg' numeric
@attribute 'viewcnt' numeric
@attribute 'beviewcnt' numeric
@attribute 'focuscnt' numeric
@attribute 'befocuscnt' numeric
@attribute 'class' {'0','1'}

A few sample rows of the data are shown below:

@data
1973,6,2,162,6,1,1,6,11,1,1,8613,1,1,3,8,2,23,0,0,1,113,42,1,3,0,2,0,5,0,27,0,0,1
1979,7,8,172,4,4,2,5,11,1,1,8651,3,6,7,6,7,1,7,0,1,113,36,4,2,0,1,8,20,28,98,0,0,1
1980,3,9,175,6,1,1,7,11,1,1,8637,1,1,7,?,0,24,0,0,0,113,35,1,0,0,0,1,3,1,20,0,1,0
1981,7,10,175,6,4,1,7,11,1,0,8623,1,1,4,8,0,5,0,2,1,113,34,4,0,0,0,0,0,0,0,0,0,0
1977,9,6,165,0,1,4,0,11,1,0,8632,1,1,4,7,7,7,8,0,1,113,38,1,0,0,0,0,9,0,20,0,2,0

2. Read the data into memory and split out the training features, training targets, test features, and test targets. In this article:

train_data = open("../../data/data/train.csv", "r")
test_data = open("../../data/data/test.csv", "r")

## train data
train_feature = []
train_target = []
for line in train_data:
    temp = line.strip().split(',')
    train_feature.append(list(map(int, temp[0:-1])))  # all columns except the last are features
    train_target.append(int(temp[-1]))                # the last column is the class label
train_data.close()

## test data
test_feature = []
test_target = []
for line in test_data:
    temp = line.strip().split(',')
    test_feature.append(list(map(int, temp[0:-1])))
    test_target.append(int(temp[-1]))
test_data.close()
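One caveat: the third sample row shown above contains a missing value marked with '?', and int() raises a ValueError on it. A minimal workaround, assuming you are willing to impute missing values with 0 (the helper name to_int is only an illustration), is:

def to_int(value, missing="?", fill=0):
    # '?' marks a missing value in this data set; map it to a sentinel
    # (here 0) so that the integer conversion does not fail
    return fill if value == missing else int(value)

# usage inside the loop above:
#     train_feature.append([to_int(v) for v in temp[0:-1]])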



3. Use OneHotEncoder to transform the categorical features in the data. In this article:

enc = OneHotEncoder(categorical_features=np.array([1,2,4,5,6,7,8,9,10,11,14,15,16,17,18,21]),
                    n_values=[13,13,9,5,5,13,5,2,13,13,9,31,10,5,2,9])

categorical_features holds the column indices of the categorical attributes, and n_values holds the number of categories that each of those attributes can take, in the same order.

Note: encode categorical attributes as consecutive integers starting from 0, e.g. (0,1,2,3,4,5). Values such as (17,19,50,100,1000) should be avoided, because this OneHotEncoder treats each value as an index into the range 0..n_values-1, so large or sparse codes either raise an error or waste a large number of empty columns; the details are not covered here.
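To see what the encoder does, here is a minimal toy sketch (assuming an older scikit-learn release, before 0.20, where categorical_features and n_values are still supported; the values are made up for illustration):

import numpy as np
from sklearn.preprocessing import OneHotEncoder

# one categorical column (3 categories) followed by one numeric column
X = np.array([[0, 170],
              [1, 165],
              [2, 180]])
enc = OneHotEncoder(categorical_features=[0], n_values=[3])
print(enc.fit_transform(X).toarray())
# The categorical column expands into 3 binary columns; in this older API the
# encoded columns come first, followed by the untouched numeric column.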

4. Use OneHotEncoder to transform the training features and the test features. In this article:

Note: the training features and the test features must both be transformed, because the transformation changes the number of columns; if either one is left untransformed, the classifier will fail. Fit the encoder on the training features and then apply the same fitted encoder to both sets.

enc.fit(train_feature)
train_feature = enc.transform(train_feature).toarray()
test_feature = enc.transform(test_feature).toarray()
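The categorical_features and n_values parameters were deprecated in scikit-learn 0.20 and later removed. On newer releases, an equivalent setup uses ColumnTransformer; the following is a sketch under that assumption, not the code used in this article:

from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

cat_cols = [1,2,4,5,6,7,8,9,10,11,14,15,16,17,18,21]
ct = ColumnTransformer(
    [("onehot", OneHotEncoder(handle_unknown="ignore"), cat_cols)],
    remainder="passthrough",          # keep the numeric columns as they are
)
ct.fit(train_feature)                 # fit on the training features only
train_feature = ct.transform(train_feature)   # may be returned as a sparse matrix
test_feature = ct.transform(test_feature)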

5. Declare a classifier, set its parameters, then train it, predict, and evaluate.

The full source, using RandomForest as the example:
from sklearn.preprocessing import OneHotEncoder
from sklearn.ensemble import RandomForestClassifier
import numpy as np
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix


train_data = open("../../data/data/train.csv", "r")
test_data = open("../../data/data/test.csv", "r")

## train data
train_feature = []
train_target = []
for line in train_data:
    temp = line.strip().split(',')
    train_feature.append(list(map(int, temp[0:-1])))  # all columns except the last are features
    train_target.append(int(temp[-1]))                # the last column is the class label
train_data.close()

## test data
test_feature = []
test_target = []
for line in test_data:
    temp = line.strip().split(',')
    test_feature.append(list(map(int, temp[0:-1])))
    test_target.append(int(temp[-1]))
test_data.close()

train_feature = np.array(train_feature)
test_feature = np.array(test_feature)


## OneHotEncoder: declare the categorical columns and their category counts
enc = OneHotEncoder(categorical_features=np.array([1,2,4,5,6,7,8,9,10,11,14,15,16,17,18,21]),
                    n_values=[13,13,9,5,5,13,5,2,13,13,9,31,10,5,2,9])
enc.fit(train_feature)

train_feature = enc.transform(train_feature).toarray()
test_feature = enc.transform(test_feature).toarray()

## train a random forest with 10 trees
clf = RandomForestClassifier(n_estimators=10)
clf = clf.fit(train_feature, train_target)

## result
print(clf.predict(test_feature))
target_names = ['losing', 'active']
print(classification_report(test_target, clf.predict(test_feature), target_names=target_names))
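The listing imports confusion_matrix but never calls it. If you also want the raw counts per class, a minimal sketch reusing the variables from the listing above is:

# rows correspond to the true classes (losing, active),
# columns to the predicted classes
print(confusion_matrix(test_target, clf.predict(test_feature)))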

The experimental results are shown below:
              precision    recall  f1-score   support

     losing       0.85      0.91      0.88     31138
     active       0.84      0.75      0.79     19725

avg / total       0.85      0.85      0.84     50863
