scikit-learn Regression Basics (Category: Machine Learning / Sklearn)
numpy ndarray attributes: shape, dtype.
ndarrays support slicing, and a slice is a view, not a copy: modifying elements of the sliced result also modifies the original ndarray.
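A small illustration of the view semantics just described (array values chosen only for demonstration):

import numpy as np

a = np.arange(6).reshape(2, 3)   # [[0 1 2], [3 4 5]]
b = a[0, 1:]                     # slicing returns a view, not a copy
b[0] = 99                        # modifying the view...
print(a[0, 1])                   # ...changes the original array: prints 99
print(a.shape, a.dtype)          # (2, 3) and the platform-dependent integer dtype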
ndarray array-creation routines: some common interfaces for constructing ndarrays (ones, zeros, and so on).
Some ndarray interfaces:
ndarray.T (transpose)
ndarray.flat: behaves like a pointer; the documentation compares it to an iterator (an analogy that holds in both languages).
flat also supports assignment; its index follows the order in which the array's data is laid out (row-major).
ndarray.item: behaves like ordinary indexing, except that it returns a built-in Python type rather than a numpy array type.
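A short illustration of these interfaces (values arbitrary):

import numpy as np

m = np.array([[1, 2], [3, 4]])
print(m.T)            # transpose: [[1 3], [2 4]]
print(m.flat[3])      # flat indexes in row-major order: 4
m.flat[0] = 7         # flat supports assignment
print(m.item(0))      # 7, returned as a built-in Python int, not a numpy type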
Since studying numpy in isolation lacks focus, we will instead use numpy and scipy to rebuild some scikit-learn results.
1.1 Generalized Linear Models
Ordinary least squares: the model is solved by differentiation.
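Concretely, setting the gradient of the squared residual to zero gives the normal equations:

$$\frac{\partial}{\partial\beta}\|X\beta - y\|_2^2 = 2X^T(X\beta - y) = 0 \;\Rightarrow\; \hat\beta = (X^TX)^{-1}X^Ty,$$

which is what the DIY code below computes, with a column of ones prepended for the intercept.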
from sklearn import linear_model

clf = linear_model.LinearRegression()
print(clf.fit([[0, 2], [1, 1], [2, 0]], [0, 1, 2]))
print(clf.coef_)
print(clf.intercept_)
print("\n" * 5)

import numpy as np
from numpy.linalg import inv
from numpy import linalg

x = np.array([[0, 2], [1, 1], [2, 0]])
y = np.array([0, 1, 2])

def LinearRegression(X, y):
    # X_bar is the augmented matrix [1 X] (column of ones for the intercept)
    X_bar = np.ones([X.shape[0], X.shape[1] + 1])
    X_bar[:, 1:] = X
    # only keep the intercept column when X_bar has full column rank, so that
    # X^T X stays invertible; otherwise fall back to the raw X without intercept
    if linalg.matrix_rank(X_bar) == X_bar.shape[1]:
        X = X_bar
    return np.dot(np.dot(inv(np.dot(X.T, X)), X.T), y)

print(LinearRegression(x, y))
Briefly, ordinary least squares requires that the columns of the design matrix not be (nearly) linearly dependent.
Ridge regression. Here the DIY code and the library call do not give quite the same results, likely because sklearn leaves the intercept unpenalized while the DIY closed form below penalizes it:
from sklearn import linear_model

clf = linear_model.Ridge(alpha=0.5)
print(clf.fit([[0, 0], [0, 0], [1, 1]], [0, 0.1, 1]))
print(clf.coef_)
print(clf.intercept_)

import numpy as np
from numpy.linalg import inv

x = np.array([[0, 0], [0, 0], [1, 1]])
y = np.array([0, 0.1, 1])

def Ridge(alpha, X, y):
    # augment with a column of ones for the intercept
    X_bar = np.ones([X.shape[0], X.shape[1] + 1])
    X_bar[:, 1:] = X
    X = X_bar
    # closed form (X^T X + alpha I)^{-1} X^T y; the identity must be
    # (p+1)x(p+1), i.e. eye(X.shape[1]) rather than eye(X.shape[0]),
    # and note this penalizes the intercept too, unlike sklearn
    return np.dot(np.dot(inv(alpha * np.eye(X.shape[1]) + np.dot(X.T, X)), X.T), y)

print(Ridge(0.5, x, y))
Extending ordinary least squares to ridge regression by the same differentiation yields the closed form directly:
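$$\hat\beta_{\mathrm{ridge}} = \arg\min_\beta \|X\beta - y\|_2^2 + \alpha\|\beta\|_2^2 = (X^TX + \alpha I)^{-1}X^Ty$$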
Since ridge regression is equivalent to a constrained least-squares problem, model selection (choosing the constraint level) becomes an issue.
Ridge regression is a method with a hyperparameter, and the hyperparameter can be selected by cross-validation, e.g. so-called GCV (generalized cross-validation).
This kind of cross-validation admits a simple closed form: https://en.wikipedia.org/wiki/Cross-validation_(statistics)
Leave-p-out cross-validation runs over all size-p held-out combinations, scored by MSE.
Ridge regression trades the invertibility of the matrix against the accuracy of the fit. Here the p = 1 (leave-one-out) form is implicitly chosen.
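As a sketch of what leave-one-out selection for ridge could look like (a hypothetical helper, not how sklearn's RidgeCV is implemented; it refits on each size-(n-1) subset and scores the held-out point):

import numpy as np
from numpy.linalg import inv

def loo_ridge_alpha(alphas, X, y):
    # leave-one-out CV: for each alpha, fit on n-1 points, score the held-out one
    n = X.shape[0]
    best = None
    for alpha in alphas:
        err = 0.0
        for i in range(n):
            mask = np.arange(n) != i
            Xt, yt = X[mask], y[mask]
            beta = inv(Xt.T @ Xt + alpha * np.eye(X.shape[1])) @ Xt.T @ yt
            err += (X[i] @ beta - y[i]) ** 2
        if best is None or err < best[1]:
            best = (alpha, err)
    return best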
from sklearn import linear_model

clf = linear_model.RidgeCV(alphas=[0.01, 0.1, 1.0, 10.0])
print(clf.fit([[0, 0], [0, 0], [1, 1]], [0, 0.1, 1]))
print(clf.alpha_)

import numpy as np
from numpy.linalg import inv, norm

x = np.array([[0, 0], [0, 0], [1, 1]])
y = [0, 0.1, 1]
alphas = [0.01, 0.1, 1.0, 10.0]

def RidgeCV(alphas, X, y):
    X_bar = np.ones([X.shape[0], X.shape[1] + 1])
    X_bar[:, 1:] = X
    def MSE(alpha):
        # residual norm on the training data itself; note this is not a true
        # held-out score, so it tends to favor the smallest alpha
        para = np.dot(np.dot(inv(np.dot(X_bar.T, X_bar) + np.eye(X_bar.shape[1]) * alpha), X_bar.T), y)
        return norm(np.dot(X_bar, para) - y)
    min_alpha_MSE = []
    for element in alphas:
        MSE_val = MSE(element)
        if not min_alpha_MSE:
            min_alpha_MSE = [element, MSE_val]
        elif MSE_val < min_alpha_MSE[1]:
            min_alpha_MSE = [element, MSE_val]
    return min_alpha_MSE

print(RidgeCV(alphas, x, y))
Next we discuss lasso regression, which simply changes the ridge L2 penalty into an L1 penalty. Because the L1 norm has non-differentiable kinks, the problem must be treated as a general optimization problem, so we first briefly introduce scipy's optimization facilities.
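For reference, the objective sklearn's Lasso minimizes is

$$\min_w \frac{1}{2n}\|Xw - y\|_2^2 + \alpha\|w\|_1,$$

whose constraint form is what the DIY code further below uses.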
Here is code solving an unconstrained optimization problem:
import numpy as np
from scipy.optimize import minimize

def rosen(x):
    # Rosenbrock function, a standard nonconvex test objective
    return sum(100 * (x[1:] - x[:-1]**2)**2 + (1 - x[:-1])**2)

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
res = minimize(rosen, x0, method='nelder-mead',
               options={'xtol': 1e-8, 'disp': True})  # newer SciPy calls this option 'xatol'
print(res.x)
Here 'nelder-mead' selects the Nelder-Mead downhill simplex method. Despite the name, this is not the simplex algorithm of linear programming (linear objective, linear inequality constraints); it is a derivative-free direct-search method, which is why it applies to a nonlinear problem like this one.
In options, xtol is the parameter tolerance for stopping the iteration and disp controls whether the result summary is printed; x0 is the starting point.
Next, code for a constrained nonlinear program:
import numpy as np                   # repeated here so the block runs standalone
from scipy.optimize import minimize

def func(x, sign=1.0):
    return sign * (2 * x[0] * x[1] + 2 * x[0] - x[0]**2 - 2 * x[1]**2)

def func_deriv(x, sign=1.0):
    # analytic gradient of func
    dfdx0 = sign * (-2 * x[0] + 2 * x[1] + 2)
    dfdx1 = sign * (2 * x[0] - 4 * x[1])
    return np.array([dfdx0, dfdx1])

cons = ({'type': 'eq',
         'fun': lambda x: np.array([x[0]**3 - x[1]]),
         'jac': lambda x: np.array([3.0 * (x[0]**2), -1.0])
         }, {'type': 'ineq',
             'fun': lambda x: np.array([x[1] - 1]),
             'jac': lambda x: np.array([0.0, 1.0])
             })

# unconstrained maximization (sign = -1 turns the max into a min)
res = minimize(func, [-1.0, 1.0], args=(-1.0,), jac=func_deriv,
               method='SLSQP', options={'disp': True})
print(res.x)

# the same problem with the constraints attached
res = minimize(func, [-1.0, 1.0], args=(-1.0,), jac=func_deriv,
               constraints=cons, method='SLSQP', options={'disp': True})
print(res.x)
Here SLSQP (Sequential Least SQuares Programming) is used; the jac argument supplies the analytic gradient of the objective.
Next we try solving the lasso with this optimization approach.
Lasso regression code. The DIY constraint bound here is loosened considerably (to 10), bringing the solution close to the true values, though one may ask whether that overfits.
Afterwards, ridge regression is also simulated with the same nonlinear solver; only the norm in the constraint differs.
from sklearn import linear_model

clf = linear_model.Lasso(alpha=0.1)
clf.fit([[0, 0], [1, 1]], [0, 1])
print(clf.predict([[1, 1]]))
print(clf.predict([[0, 0]]))

import numpy as np
from numpy.linalg import norm
from scipy.optimize import minimize
import functools

def func(X, y, beta):
    # objective: residual norm of the intercept-augmented model, scaled by 2n
    X_bar = np.ones([X.shape[0], X.shape[1] + 1])
    X_bar[:, 1:] = X
    return norm(np.dot(X_bar, beta) - y) / (2 * X.shape[0])

# constraint form of the lasso: ||beta||_1 <= 10 (alpha set to 0.1 in sklearn above)
cons = ({
    'type': 'ineq',
    'fun': lambda beta: np.array([10 - norm(beta, 1)])
},)

X = np.array([[0, 0], [1, 1]])
y = np.array([0, 1])

func_0 = functools.partial(func, X)
func_1 = functools.partial(func_0, y)
# func_1 has the single parameter beta
res = minimize(func_1, [0, 0, 0], constraints=cons, method='SLSQP',
               options={'disp': True})
beta = res.x
print(beta)
print(np.dot(np.array([1, 1, 1]), beta))
print(np.dot(np.array([1, 0, 0]), beta))

# the same approach applied to ridge regression: an L2-norm constraint instead
cons = ({
    'type': 'ineq',
    'fun': lambda beta: np.array([10 - norm(beta)])
},)
res = minimize(func_1, [0, 0, 0], constraints=cons, method='SLSQP',
               options={'disp': True})
beta = res.x
print(beta)
print(np.dot(np.array([1, 1, 1]), beta))
print(np.dot(np.array([1, 0, 0]), beta))

clf = linear_model.Ridge(alpha=0.1)
clf.fit(X, y)
print(clf.intercept_)
print(clf.coef_)
print(clf.intercept_ + np.dot(clf.coef_, np.array([1, 1])))
print(clf.intercept_ + np.dot(clf.coef_, np.array([0, 0])))
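For contrast, sklearn's own Lasso solver uses coordinate descent with soft-thresholding rather than a general-purpose constrained optimizer. A minimal sketch of that idea (helper names hypothetical; no intercept, inputs assumed reasonably scaled):

import numpy as np

def soft_threshold(z, t):
    # proximal operator of the L1 norm
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_cd(X, y, alpha, n_iter=100):
    # coordinate descent on (1/(2n))||Xw - y||^2 + alpha * ||w||_1
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]      # residual with feature j excluded
            rho = X[:, j] @ r / n
            den = X[:, j] @ X[:, j] / n
            w[j] = soft_threshold(rho, alpha) / den if den > 0 else 0.0
        # (a real solver would also stop early on a convergence tolerance)
    return w

print(lasso_cd(np.array([[0., 0.], [1., 1.]]), np.array([0., 1.]), 0.1))

The soft-thresholding step is what produces exact zeros in the coefficients, which the SLSQP approach above only approaches approximately.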
The intuition for lasso producing sparse solutions comes from looking at the two-dimensional contour plot together with the constraint region: the optimum usually lands on a corner of the constraint polytope, because with probability 1 the center of the elliptical contours is not placed so that the contours first touch the region on a flat face. (http://blog.csdn.net/xidianzhimeng/article/details/20856047)
Since lasso is likely to yield sparse solutions, it is well suited to feature selection on high-dimensional data (dimension reduction for high-dimensional factor models). Details to be expanded later (L1-based feature selection).
The lasso parameter alpha is called the sparsity level; it is selected much like the ridge hyperparameter, via cross-validation (AIC/BIC-based selection methods also exist), and we defer the details.
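For example, sklearn ships a cross-validated variant (a minimal sketch on toy data; cv=3 is set explicitly because there are only three samples):

from sklearn import linear_model

clf = linear_model.LassoCV(alphas=[0.01, 0.1, 1.0], cv=3)
clf.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
print(clf.alpha_)   # the alpha chosen by cross-validation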
Combining the lasso and ridge penalties gives the Elastic Net method; its objective (in sklearn's parameterization) and the implementation follow:
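$$\min_w \frac{1}{2n}\|Xw - y\|_2^2 + \alpha\,\rho\,\|w\|_1 + \frac{\alpha(1-\rho)}{2}\|w\|_2^2, \qquad \rho = \texttt{l1\_ratio}$$

The DIY code uses the constraint form of this penalty (with the L2 term left unsquared, a slight variant).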
from sklearn import linear_model

clf = linear_model.ElasticNet(l1_ratio=0.1)
clf.fit([[108, 33], [44, 10001]], [861, 444])
print(clf.coef_)
print(clf.intercept_)
print(clf.predict([[108, 33]]))
print(clf.predict([[44, 10001]]))

import numpy as np
from numpy.linalg import norm
from scipy.optimize import minimize
import functools

def func(X, y, beta):
    X_bar = np.ones([X.shape[0], X.shape[1] + 1])
    X_bar[:, 1:] = X
    return norm(np.dot(X_bar, beta) - y) / (2 * X.shape[0])

alpha = 1.0
l1_ratio = 0.1
penalty = 10
# constraint form of the elastic-net penalty: a mixed L1/L2 bound
cons = ({
    'type': 'ineq',
    'fun': lambda beta: penalty - alpha * l1_ratio * norm(beta, 1) - alpha * (1 - l1_ratio) * norm(beta) / 2
},)

X = np.array([[108, 33], [44, 10001]])
y = np.array([861, 444])

func_0 = functools.partial(func, X)
func_1 = functools.partial(func_0, y)
res = minimize(func_1, [0, 8, 1], constraints=cons, method='SLSQP',
               options={'disp': True})
print(res.x)
print(np.dot([1, 108, 33], res.x))
print(np.dot([1, 44, 10001], res.x))
Here the predicted values are close to the targets, but the fitted parameters do not agree between the two versions; after telling sklearn not to fit the intercept (fit_intercept=False), the results are very close.
Applying lasso to the multi-output case (the response is a matrix) simply enlarges the dimensions, but note one implementation detail: scipy's optimization constraints operate on vectors only, so a parameter matrix must be flattened into a vector (otherwise it errors). The DIY optimization code below does exactly that, and compares the parameter error against the OLS (unbiased) estimate.
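A note on the penalty used below: it bounds the sum of the column 2-norms of the rebuilt W,

$$\Omega(W) = \sum_j \|W_{\cdot j}\|_2 = \sum_j \Big(\sum_i W_{ij}^2\Big)^{1/2},$$

computed via np.diag(np.dot(W.T, W)) ** 0.5. sklearn's MultiTaskLasso instead penalizes the sum of row norms of the coefficient matrix (grouping each feature across tasks), so the grouping differs, but the flatten-constrain-rebuild mechanics are the same.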
import numpy as np
from numpy.random import normal, poisson, multivariate_normal
from numpy.linalg import norm, inv
from functools import partial
from scipy.optimize import minimize

# simulate X (Gaussian design) and a true coefficient matrix W (Poisson counts)
X = np.zeros([5, 5])
W = np.zeros([5, 5])
for i in range(5):
    X[i, :] = normal(10, 3, 5)   # the original wrote X[i:], which also overwrote later rows
    W[i, :] = poisson(10, 5)
Y = np.dot(X, W) + multivariate_normal(np.ones(5), np.eye(5), 5)

def func(X, Y, w):
    # the optimizer works on the flattened vector w; rebuild the 5x5 matrix
    W = np.zeros([5, 5])
    for i in range(len(w)):
        first, second = divmod(i, 5)   # row-major (row, column) index
        W[first][second] = w[i]
    return norm(np.dot(X, W) - Y) / (2 * X.shape[0])

penalty = 10
alpha = 0.01

def strain(penalty, alpha, w):
    W = np.zeros([5, 5])
    for i in range(len(w)):
        first, second = divmod(i, 5)
        W[first][second] = w[i]
    # sum of the column 2-norms of W (a mixed-norm penalty)
    return penalty - alpha * sum(np.diag(np.dot(W.T, W)) ** 0.5)

strain_0 = partial(strain, penalty)
strain_1 = partial(strain_0, alpha)
cons = ({'type': 'ineq',
         'fun': strain_1},)
func_0 = partial(func, X)
func_1 = partial(func_0, Y)
res = minimize(func_1, np.zeros(5 * 5), constraints=cons, method='SLSQP',
               options={'disp': True})
W_bar = np.zeros([5, 5])
for i in range(len(res.x)):
    first, second = divmod(i, 5)
    W_bar[first][second] = res.x[i]
print(norm(W))
print(norm(W - W_bar))                                          # DIY error
print(norm(W - np.dot(np.dot(inv(np.dot(X.T, X)), X.T), Y)))    # OLS error
The results show that with small alpha the DIY multi-lasso converges to OLS, consistent with theory.
Below are the sklearn version and the DIY version. In the custom code, note how the intercept enters: the data matrix is augmented with a column of ones exactly as in the single-output case, and the parameter matrix's first dimension is one larger than the data matrix's second dimension (as one can derive).
from sklearn import linear_model

clf = linear_model.MultiTaskLasso(alpha=0.1)
clf.fit([[0, 0], [1, 1], [2, 2]], [[0, 0], [1, 1], [2, 2]])
print(clf.coef_)
print(clf.intercept_)
print("\n")

import numpy as np
from numpy.linalg import norm
from scipy.optimize import minimize
from functools import partial
import copy

def func(X, Y, w):
    # rebuild the ((p+1) x n_tasks) parameter matrix from the flat vector
    W = np.resize(w, [X.shape[1] + 1, Y.shape[1]])
    X_bar = np.ones([X.shape[0], X.shape[1] + 1])
    X_bar[:, 1:] = X
    return norm(np.dot(X_bar, W) - Y) / (2 * X.shape[0])

penalty = 1

def strain(penalty, first_dim, second_dim, w):
    W = np.resize(w, [first_dim, second_dim])
    return penalty - sum(np.diag(np.dot(W.T, W)) ** 0.5)

X = np.array([[0, 0], [1, 1], [2, 2]])
Y = copy.deepcopy(X)

strain_0 = partial(strain, penalty)
strain_1 = partial(strain_0, X.shape[1] + 1)
strain_2 = partial(strain_1, Y.shape[1])
cons = ({'type': 'ineq',
         'fun': strain_2},)
func_0 = partial(func, X)
func_1 = partial(func_0, Y)
res = minimize(func_1, np.zeros([(X.shape[1] + 1) * Y.shape[1]]),
               method='SLSQP', constraints=cons, options={'disp': True})
print(res.x.reshape(X.shape[1] + 1, Y.shape[1]))
np.resize() flattens and copies in row-major (C) order, repeating the data if the new shape is larger.
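A quick illustration (values arbitrary):

import numpy as np

w = np.array([1, 2, 3, 4, 5, 6])
print(np.resize(w, [2, 3]))   # [[1 2 3], [4 5 6]]: row-major fill
print(np.resize(w, [4, 2]))   # repeats the data: [[1 2], [3 4], [5 6], [1 2]]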
For more, see: http://blog.csdn.net/sinat_30665603