Hands-on: training models with xgboost, sklearn, and pandas
Source: Internet · Editor: 程序博客网 · Posted: 2024/06/07 10:07
```python
import xgboost as xgb
from sklearn.model_selection import KFold, GridSearchCV
from sklearn.metrics import confusion_matrix, mean_squared_error
from sklearn.datasets import load_iris, load_digits, load_boston

# Build the models with xgboost; evaluate them with sklearn.

# Multi-class classification on the digits dataset (10 classes),
# evaluated with a confusion matrix.
digits = load_digits()
y = digits['target']
X = digits['data']
print(X.shape)  # (1797, 64)
print(y.shape)  # (1797,)

# K-fold splitter
kf = KFold(n_splits=2, shuffle=True, random_state=1234)
for train_index, test_index in kf.split(X):
    xgboost_model = xgb.XGBClassifier().fit(X[train_index], y[train_index])
    # predictions
    pred = xgboost_model.predict(X[test_index])
    # ground truth
    ground_truth = y[test_index]
    print(confusion_matrix(ground_truth, pred))
```

Output (one confusion matrix per fold):

```
[[78  0  0  0  0  0  0  0  1  0]
 [ 0 92  1  0  0  0  0  0  0  0]
 [ 0  2 82  0  0  0  2  0  0  0]
 [ 0  1  1 88  0  0  0  1  0  3]
 [ 2  0  0  0 99  0  2  3  1  0]
 [ 0  0  0  1  0 95  2  0  0  4]
 [ 0  2  0  0  0  0 84  0  2  0]
 [ 0  0  0  0  0  0  0 86  0  2]
 [ 0  6  0  2  0  0  0  0 73  1]
 [ 0  1  0  0  1  0  0  5  2 71]]
[[98  0  0  0  0  0  0  1  0  0]
 [ 0 84  2  1  0  0  1  0  0  1]
 [ 1  0 88  0  0  0  0  1  1  0]
 [ 0  0  1 86  0  1  0  0  0  1]
 [ 0  0  0  0 74  0  0  0  0  0]
 [ 1  0  0  0  1 73  0  0  1  4]
 [ 0  0  0  0  1  1 91  0  0  0]
 [ 0  0  0  1  0  0  0 89  1  0]
 [ 1  1  0  0  1  1  0  0 87  1]
 [ 0  2  0  1  0  1  0  0  2 94]]
```

The same pattern works for the iris dataset (3 classes):

```python
# Multi-class classification on iris
iris = load_iris()
y_iris = iris['target']
X_iris = iris['data']
kf = KFold(n_splits=2, shuffle=True, random_state=1234)
for train_index, test_index in kf.split(X_iris):
    xgboost_model = xgb.XGBClassifier().fit(X_iris[train_index], y_iris[train_index])
    # predictions
    pred = xgboost_model.predict(X_iris[test_index])
    # ground truth
    ground_truth = y_iris[test_index]
    print(confusion_matrix(ground_truth, pred))
```

For regression, swap in `XGBRegressor` and score with mean squared error:

```python
# Regression on the Boston housing data
# (note: load_boston was removed in scikit-learn 1.2)
boston = load_boston()
y_boston = boston['target']
X_boston = boston['data']
kf = KFold(n_splits=2, shuffle=True, random_state=1234)
for train_index, test_index in kf.split(X_boston):
    xgboost_model = xgb.XGBRegressor().fit(X_boston[train_index], y_boston[train_index])
    # predictions
    pred = xgboost_model.predict(X_boston[test_index])
    # ground truth
    ground_truth = y_boston[test_index]
    print(mean_squared_error(ground_truth, pred))
```

Hyperparameter tuning (parameter selection)

```python
boston = load_boston()
y_boston = boston['target']
X_boston = boston['data']
xgb_model = xgb.XGBRegressor()
# parameter grid
param_dict = {'max_depth': [2, 4, 6],
              'n_estimators': [50, 100, 200]}
rgs = GridSearchCV(xgb_model, param_dict)
rgs.fit(X_boston, y_boston)
print(rgs.best_score_)
print(rgs.best_params_)
```