LDA and QDA Classification: Machine Learning with Sklearn
Source: Internet · Editor: 程序博客网 · Date: 2024/06/05 06:30
A simple R implementation:
library(MASS)
Iris <- data.frame(rbind(iris3[,,1], iris3[,,2], iris3[,,3]),
                   Sp = rep(c("s", "c", "v"), rep(50, 3)))
y_hat <- predict(lda(Sp ~ ., Iris, prior = c(1, 1, 1)/3), Iris)$class
sum(y_hat == Iris$Sp) / 150
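For comparison, the same flat-prior LDA fit can be sketched in Python with scikit-learn; the `priors` parameter plays the role of R's `prior = c(1, 1, 1)/3` (a minimal sketch, assuming scikit-learn is installed):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

iris = load_iris()
X, y = iris.data, iris.target

# priors=[1/3, 1/3, 1/3] mirrors R's prior = c(1, 1, 1)/3 (flat prior over the 3 species)
clf = LinearDiscriminantAnalysis(priors=[1/3, 1/3, 1/3])
clf.fit(X, y)
accuracy = np.mean(clf.predict(X) == y)  # training accuracy, as in the R snippet
```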
In general, NumPy routines written for ndarrays also work on built-in Python lists (the array constructor performs the conversion).
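A quick illustration of that point: passing a nested list to a NumPy function gives the same result as passing an equivalent ndarray.

```python
import numpy as np

# NumPy functions accept plain Python lists: the constructor
# (np.asarray) converts them internally before computing.
lst = [[1.0, 2.0], [3.0, 4.0]]
from_list = np.mean(lst, axis=0)             # works directly on a nested list
from_array = np.mean(np.array(lst), axis=0)  # identical result on an ndarray
print(from_list)  # [2. 3.]
```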
Below, sklearn's linear and quadratic discriminant analyses are invoked and compared against a DIY quadratic discriminant. Python's string-execution functions exec and eval are used to instantiate one discriminant function per class, improving the reusability of the DIY function:
import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis
from functools import partial

iris = load_iris()
X, y = iris.data, iris.target

clf = LinearDiscriminantAnalysis()
clf.fit(X, y)
y_hat = clf.predict(X)
print("the sklearn lda accuracy:")
print(np.sum(y_hat == y) / y.shape[0])

clf = QuadraticDiscriminantAnalysis()
clf.fit(X, y)
y_hat = clf.predict(X)
print("the sklearn qda accuracy:")
print(np.sum(y_hat == y) / y.shape[0])

# QDA discriminant g_k(x): log of the class-conditional Gaussian density plus the log prior.
def discriminant_func(x, X, prior_probability):
    cov_matrix = np.cov(X.T)
    mean_vector = np.mean(X, axis=0)
    d = X.shape[1]  # feature dimension (the constant term uses d, not the sample count)
    return (np.dot(np.dot((x - mean_vector).T, np.linalg.inv(cov_matrix)), x - mean_vector) * (-0.5)
            + np.log(2 * np.pi) * (-0.5 * d)
            + np.log(np.linalg.det(cov_matrix)) * (-0.5)
            + np.log(prior_probability))

factors = np.unique(y)
prior_probability = 1 / len(factors)
for factor in factors:
    # exec builds one per-class data subset and one specialized discriminant function
    exec("X" + str(factor) + " = X[y == " + str(factor) + ", :]")
    exec("discriminant_func" + str(factor) + " = partial(discriminant_func, X=X" + str(factor)
         + ", prior_probability=" + str(prior_probability) + ")")

def y_predict_qda(x):
    return np.argmax([eval("discriminant_func" + str(factor) + "(x)") for factor in factors])

print("the diy qda accuracy:")
print(np.sum(np.array([y_predict_qda(x) for x in X]) == y) / y.shape[0])
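The exec/eval trick works, but a dictionary of `partial` objects achieves the same per-class reuse without dynamic code generation, which is easier to debug and safe against name collisions. A sketch under the same flat-prior assumption:

```python
import numpy as np
from functools import partial
from sklearn.datasets import load_iris

iris = load_iris()
X, y = iris.data, iris.target

def discriminant_func(x, X_k, prior):
    # log Gaussian density of class k plus log prior
    cov = np.cov(X_k.T)
    mu = np.mean(X_k, axis=0)
    d = X_k.shape[1]
    diff = x - mu
    return (-0.5 * diff @ np.linalg.inv(cov) @ diff
            - 0.5 * d * np.log(2 * np.pi)
            - 0.5 * np.log(np.linalg.det(cov))
            + np.log(prior))

factors = np.unique(y)
prior = 1 / len(factors)
# one specialized discriminant per class, keyed by label -- no exec/eval needed
funcs = {k: partial(discriminant_func, X_k=X[y == k], prior=prior) for k in factors}

def predict_qda(x):
    return max(factors, key=lambda k: funcs[k](x))

accuracy = np.mean(np.array([predict_qda(x) for x in X]) == y)
```

The dictionary plays the role the generated variable names played before: `funcs[k]` is exactly `discriminant_funck` from the exec version.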
Corresponding to classification is dimensionality reduction: LDA's projection directions are determined simply by the relative eigenvalues of the between-class scatter matrix with respect to the within-class scatter matrix.
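That reduction view is exposed directly by scikit-learn's `transform`; a minimal sketch on iris, where at most `n_classes - 1 = 2` components exist and `explained_variance_ratio_` reports the relative size of those eigenvalues:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

iris = load_iris()
X, y = iris.data, iris.target

# n_components is capped at n_classes - 1; the projection axes solve the
# between-class vs. within-class generalized eigenvalue problem
lda = LinearDiscriminantAnalysis(n_components=2)
X_2d = lda.fit_transform(X, y)
print(X_2d.shape)                     # (150, 2)
print(lda.explained_variance_ratio_)  # relative eigenvalues, in decreasing order
```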
For more, see: http://blog.csdn.net/sinat_30665603