Python Code for Gradient Descent

The previous post summarized gradient descent, so today I'm trying to implement it in Python. I chose Python because the code comes out shorter =。=

If anything is wrong, I hope you can help me correct it~~

First is batch gradient descent, which suits cases where the number of training samples is not especially large, and which also works when the number of features n is larger:

#Training data set
#-----by plz
import numpy as np

def linear_regression(data_x, data_y, m, n, alpha):
    ones = np.ones(m)
    data_x = np.column_stack((data_x, ones))   # append an intercept column of ones
    theta_guess = np.ones(n+1)     # with n features, theta is (n+1)-dimensional
    x_trans = data_x.transpose()   # transpose of data_x
    theta_last = np.zeros(n+1)
    # iterate until theta barely changes between updates
    while np.sum((theta_guess - theta_last)**2) > 1e-10:
        print("np.sum((theta_guess-theta_last)**2) is:", np.sum((theta_guess - theta_last)**2))
        theta_last = theta_guess
        hypothesis = np.dot(data_x, theta_guess)
        loss = hypothesis - data_y
        cost = np.sum(loss ** 2)
        print("cost is", cost)
        gradient = np.dot(x_trans, loss)
        theta_guess = theta_guess - alpha * gradient
    return theta_guess

data_x = np.array([(0,1),(1,2),(2,3)])
data_y = np.array([4.9,7.8,11.3])
m = data_x.shape[0]   # number of training samples
n = data_x.shape[1]   # number of features per sample
alpha = 0.001
theta = linear_regression(data_x, data_y, m, n, alpha)
print(theta)
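As a quick sanity check (my own addition, not from the original post): the loop above implements the batch update theta := theta - alpha * X^T(X theta - y), with the usual 1/m factor folded into alpha, so its fitted values can be compared against NumPy's closed-form least-squares fit. Note that for this toy dataset the parameter vectors themselves may differ, since the second feature always equals the first plus one and the design matrix is rank-deficient; the fitted values X @ theta still agree.

# Sanity check (my own addition): compare fitted values against np.linalg.lstsq
import numpy as np

data_x = np.array([(0,1),(1,2),(2,3)])
data_y = np.array([4.9,7.8,11.3])
X = np.column_stack((data_x, np.ones(data_x.shape[0])))   # same intercept column as above
theta_ls, *_ = np.linalg.lstsq(X, data_y, rcond=None)
print(X @ theta_ls)   # fitted values; should match X @ theta from the gradient-descent run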

Next is stochastic gradient descent, which suits datasets with thousands or tens of thousands of samples. Compared with batch gradient descent, stochastic gradient descent only hovers near the optimum and never quite converges to it, but it is much faster and the results are still pretty good~

#------by plz
import numpy as np

def linear_regression(data_x, data_y, m, n, alpha):
    data_x = np.column_stack((data_x, np.ones(m)))   # append an intercept term
    theta = np.ones(n+1)
    # update theta one sample at a time instead of over the whole batch
    for j in range(m):
        hypothesis = np.dot(data_x[j], theta)
        loss = hypothesis - data_y[j]
        gradient = loss * data_x[j]
        theta = theta - alpha * gradient
    return theta

# Only an 11-sample dataset here as an example; in practice a large number of samples is needed
data_x = np.array([0,1,2,3,4,5,6,7,8,9,10])
data_y = np.array([95.3,97.2,60.1,49.34,37.4,51.05,25.5,5.25,0.63,-9.4,-4.38])
m = 11
n = 1
theta = linear_regression(data_x, data_y, m, n, 0.001)
print(theta)
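The code above makes only a single pass over the samples. A common refinement (a sketch of my own, with a hypothetical helper name sgd_epochs, not part of the original post) is to make several passes in shuffled order with a decaying learning rate, which lets theta settle closer to the optimum instead of just hovering around it:

# Sketch (my own addition): several shuffled epochs with a decaying learning rate
import numpy as np

def sgd_epochs(data_x, data_y, alpha, epochs):
    m = data_x.shape[0]
    X = np.column_stack((data_x, np.ones(m)))   # add intercept column
    theta = np.ones(X.shape[1])
    rng = np.random.default_rng(0)
    for epoch in range(epochs):
        step = alpha / (1 + epoch)               # simple learning-rate decay
        for j in rng.permutation(m):             # visit samples in random order
            loss = np.dot(X[j], theta) - data_y[j]
            theta = theta - step * loss * X[j]
    return theta

data_x = np.array([0,1,2,3,4,5,6,7,8,9,10])
data_y = np.array([95.3,97.2,60.1,49.34,37.4,51.05,25.5,5.25,0.63,-9.4,-4.38])
print(sgd_epochs(data_x, data_y, 0.001, 50))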

If anything is wrong, please do point it out~ Let's improve together~


