Implementing basic linear regression with TensorFlow (Linear regression)
Linear regression (Linear Regression)
This article implements basic linear regression with TensorFlow.
The code is adapted from the GitHub repository [TensorFlow-Examples].
1. Load the data with NumPy
train_X = numpy.asarray([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59, 2.167,
                         7.042, 10.791, 5.313, 7.997, 5.654, 9.27, 3.1])
train_Y = numpy.asarray([1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596, 2.53, 1.221,
                         2.827, 3.465, 1.65, 2.904, 2.42, 2.94, 1.3])  # 17 (train_X, train_Y) data points
n_samples = train_X.shape[0]  # number of training samples
If set is the 2-D array [[1, 2], [3, 4], [5, 6], [7, 8]], then:
set.shape[0] gives the number of rows
set.shape[1] gives the number of columns
set.shape gives the shape of the array
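For example, a quick check in NumPy (an illustrative snippet, not part of the original example; arr is used instead of set, since set shadows a Python built-in name):

import numpy

arr = numpy.asarray([[1, 2], [3, 4], [5, 6], [7, 8]])
print(arr.shape[0])  # 4, the number of rows
print(arr.shape[1])  # 2, the number of columns
print(arr.shape)     # (4, 2), the full shape tuple

For the 1-D training data above, train_X.shape[0] is therefore 17.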
2. Set the learning rate, the input placeholders, and the weight and bias variables
learning_rate = 0.01    # learning rate
training_epochs = 1000  # number of training epochs
display_step = 50       # how often to display progress

# Placeholders for X and Y, set to 32-bit floats
X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)

# Initialize the weight (W) randomly and the bias (b) to zero
W = tf.Variable(tf.random_uniform([1]))
b = tf.Variable(tf.zeros([1]))
3. Minimize the error
# Construct the linear model y = x*W + b
pred = tf.add(tf.multiply(X, W), b)

# Mean squared error
cost = tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * n_samples)

# Gradient descent
# Note: minimize() knows to modify W and b because Variable objects are trainable=True by default
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Initialize all variables
init = tf.global_variables_initializer()
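To make the note about minimize() concrete, here is a minimal sketch (my addition, not part of the original example) of what plain gradient descent does under the hood: tf.gradients computes dcost/dW and dcost/db, and the assignments apply the update rule param <- param - learning_rate * gradient. Running manual_step once in a session, with the same feed_dict as in the training loop below, is roughly equivalent to one optimizer step.

# Illustrative only: what GradientDescentOptimizer.minimize() does internally
grads = tf.gradients(cost, [W, b])                      # [dcost/dW, dcost/db]
update_W = tf.assign(W, W - learning_rate * grads[0])   # W <- W - lr * dcost/dW
update_b = tf.assign(b, b - learning_rate * grads[1])   # b <- b - lr * dcost/db
manual_step = tf.group(update_W, update_b)              # apply both updates together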
4. Start training
# Start training
with tf.Session() as sess:
    # Run the initializer
    sess.run(init)

    # Fit all training data
    for epoch in range(training_epochs):
        sess.run(optimizer, feed_dict={X: train_X, Y: train_Y})

        # Display logs per epoch step
        if (epoch + 1) % display_step == 0:
            c = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
            print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(c),
                  "W=", sess.run(W), "b=", sess.run(b))

    print("Optimization Finished!")
    training_cost = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
    print("Training cost=", training_cost, "W=", sess.run(W), "b=", sess.run(b), '\n')
Training output: the cost, W, and b are printed every display_step epochs as the loop runs.
5. Plot the result
Before plotting, add import matplotlib.pyplot as plt at the top of the script.
    # Graphic display
    plt.plot(train_X, train_Y, 'ro', label='Original data')
    plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Fitted line')
    plt.legend()
    plt.show()
6. Test the fitted model on a test set (mean squared loss comparison)
    # Testing example, as requested (Issue #2)
    test_X = numpy.asarray([6.83, 4.668, 8.9, 7.91, 5.7, 8.7, 3.1, 2.1])
    test_Y = numpy.asarray([1.84, 2.273, 3.2, 2.831, 2.92, 3.24, 1.35, 1.03])

    print("Testing... (Mean square loss Comparison)")
    testing_cost = sess.run(
        tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * test_X.shape[0]),
        feed_dict={X: test_X, Y: test_Y})  # same function as cost above
    print("Testing cost=", testing_cost)
    print("Absolute mean square loss difference:", abs(training_cost - testing_cost))

    plt.plot(test_X, test_Y, 'bo', label='Testing data')
    plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Fitted line')
    plt.legend()
    plt.show()
7. Complete code
import tensorflow as tf
import numpy
import matplotlib.pyplot as plt

# Parameters
learning_rate = 0.01
training_epochs = 1000
display_step = 50

# Training Data
train_X = numpy.asarray([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59, 2.167,
                         7.042, 10.791, 5.313, 7.997, 5.654, 9.27, 3.1])
train_Y = numpy.asarray([1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596, 2.53, 1.221,
                         2.827, 3.465, 1.65, 2.904, 2.42, 2.94, 1.3])
n_samples = train_X.shape[0]

# tf Graph Input
X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)

# Set model weights
W = tf.Variable(tf.random_uniform([1]))
b = tf.Variable(tf.zeros([1]))

# Construct a linear model
pred = tf.add(tf.multiply(X, W), b)

# Mean squared error
cost = tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * n_samples)

# Gradient descent
# Note, minimize() knows to modify W and b because Variable objects are trainable=True by default
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()

# Start training
with tf.Session() as sess:
    # Run the initializer
    sess.run(init)

    # Fit all training data
    for epoch in range(training_epochs):
        sess.run(optimizer, feed_dict={X: train_X, Y: train_Y})

        # Display logs per epoch step
        if (epoch + 1) % display_step == 0:
            c = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
            print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(c),
                  "W=", sess.run(W), "b=", sess.run(b))

    print("Optimization Finished!")
    training_cost = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
    print("Training cost=", training_cost, "W=", sess.run(W), "b=", sess.run(b), '\n')

    # Graphic display
    plt.plot(train_X, train_Y, 'ro', label='Original data')
    plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Fitted line')
    plt.legend()
    plt.show()

    # Testing example, as requested (Issue #2)
    test_X = numpy.asarray([6.83, 4.668, 8.9, 7.91, 5.7, 8.7, 3.1, 2.1])
    test_Y = numpy.asarray([1.84, 2.273, 3.2, 2.831, 2.92, 3.24, 1.35, 1.03])

    print("Testing... (Mean square loss Comparison)")
    testing_cost = sess.run(
        tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * test_X.shape[0]),
        feed_dict={X: test_X, Y: test_Y})  # same function as cost above
    print("Testing cost=", testing_cost)
    print("Absolute mean square loss difference:", abs(training_cost - testing_cost))

    plt.plot(test_X, test_Y, 'bo', label='Testing data')
    plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Fitted line')
    plt.legend()
    plt.show()
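Note that this script uses the TensorFlow 1.x API (placeholder, Session, tf.train). If you only have TensorFlow 2.x installed, one possible workaround (my assumption, not covered in the original post) is to import the 1.x compatibility module and disable 2.x behavior before running the script otherwise unchanged:

import tensorflow.compat.v1 as tf  # use the TF 1.x compatibility API under TF 2.x
tf.disable_v2_behavior()           # restore graph mode, placeholders, and Sessions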