[action] Deep Learning with TensorFlow in Practice (2): Implementing a Simple Neural Network and Stochastic Gradient Descent (S.G.D.)
In the previous installment, 实战(1), we cleaned and packaged the data into 'notMNIST.pickle'.
This post walks through building a simple neural network with TensorFlow and training it with stochastic gradient descent.
```python
# These are all the modules we'll be using later.
# Make sure you can import them before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
```

First, load the 'notMNIST.pickle' file produced in 实战(1).
```python
pickle_file = 'notMNIST.pickle'

with open(pickle_file, 'rb') as f:
    save = pickle.load(f)
    train_dataset = save['train_dataset']
    train_labels = save['train_labels']
    valid_dataset = save['valid_dataset']
    valid_labels = save['valid_labels']
    test_dataset = save['test_dataset']
    test_labels = save['test_labels']
    del save  # hint to help gc free up memory
    print('Training set', train_dataset.shape, train_labels.shape)
    print('Validation set', valid_dataset.shape, valid_labels.shape)
    print('Test set', test_dataset.shape, test_labels.shape)
```

The output is:
```
Training set (200000, 28, 28) (200000,)
Validation set (10000, 28, 28) (10000,)
Test set (10000, 28, 28) (10000,)
```
Next, reformat the data. Each 28x28 image is flattened into a one-dimensional array, so the dataset becomes a 2-D array. The labels also become a 2-D array of one-hot encodings:

0 maps to [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
1 maps to [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
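To see why this works, here is a quick standalone numpy demo of the broadcasting trick used in `reformat` below (the label values are made up for illustration):

```python
import numpy as np

labels = np.array([0, 1, 3])
# Comparing each label against np.arange(10) broadcasts to a boolean row
# per label; astype turns the booleans into the one-hot float encoding.
one_hot = (np.arange(10) == labels[:, None]).astype(np.float32)
print(one_hot[0])  # [1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
print(one_hot[1])  # [0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
```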
```python
image_size = 28
num_labels = 10

def reformat(dataset, labels):
    # -1 lets numpy infer the size of the first dimension
    dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
    # Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...] (one-hot)
    labels = (np.arange(num_labels) == labels[:, None]).astype(np.float32)
    return dataset, labels

train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
```

The output is:
```
Training set (200000, 784) (200000, 10)
Validation set (10000, 784) (10000, 10)
Test set (10000, 784) (10000, 10)
```
TensorFlow works like this: first you describe your inputs, variables, and operations. Together these make up the computation graph, and all subsequent definitions must go inside the graph's block, for example:

```python
with graph.as_default():
    ...
```
You can then execute the operations you defined with session.run(). A context manager is used to create the session, and the operations you defined must also be run inside the session's block:

```python
with tf.Session(graph=graph) as session:
    ...
```
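To make the define-then-run pattern concrete, here is a minimal self-contained sketch (toy values of my own, not part of the tutorial's pipeline):

```python
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    # Define inputs and operations; nothing is computed at this point.
    a = tf.constant([[1.0, 2.0]])
    b = tf.constant([[3.0], [4.0]])
    product = tf.matmul(a, b)

with tf.Session(graph=graph) as session:
    # Only session.run() actually executes the graph and returns numpy arrays.
    print(session.run(product))  # [[ 11.]]
```

With that pattern in mind, we can load the data and start training.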
```python
# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000

graph = tf.Graph()
with graph.as_default():
    # 1. Input data.
    # Load the training, validation and test data into constants that are
    # attached to the graph.
    tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
    tf_train_labels = tf.constant(train_labels[:train_subset])
    tf_valid_dataset = tf.constant(valid_dataset)
    tf_test_dataset = tf.constant(test_dataset)

    # 2. Variables: the parameters we are going to train.
    # The weight matrix is initialized with random values drawn from a
    # truncated normal distribution (values more than 2 standard deviations
    # from the mean are dropped and re-picked); the biases start at zero.
    # tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, ...)
    weights = tf.Variable(
        tf.truncated_normal([image_size * image_size, num_labels]))
    biases = tf.Variable(tf.zeros([num_labels]))

    # 3. Training computation.
    # We multiply the inputs by the weight matrix and add the biases. We
    # compute the softmax and cross-entropy (a single fused op in TensorFlow,
    # because the combination is very common and can be optimized). The
    # average of this cross-entropy across all training examples is our loss.
    logits = tf.matmul(tf_train_dataset, weights) + biases
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(
            logits=logits, labels=tf_train_labels))
    # tf.reduce_mean computes the mean of elements across dimensions of a tensor.

    # 4. Optimizer.
    # Find the minimum of the loss using gradient descent;
    # 0.5 is the learning rate.
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

    # 5. Predictions for the training, validation, and test data.
    # These are not part of training, but merely here so that we can report
    # accuracy figures as we train.
    train_prediction = tf.nn.softmax(logits)
    valid_prediction = tf.nn.softmax(tf.matmul(tf_valid_dataset, weights) + biases)
    test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
    # Shapes: (num, 784) x (784, 10) + (10,) = (num, 10)
```
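For reference, here is roughly what the fused cross-entropy op computes, written out as a plain numpy sketch (my own illustration; the real op is fused and numerically more stable):

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    # Softmax turns each row of logits into a probability distribution...
    probs = np.exp(logits) / np.sum(np.exp(logits), axis=1, keepdims=True)
    # ...and cross-entropy measures its distance from the one-hot labels.
    return -np.sum(labels * np.log(probs), axis=1)

logits = np.array([[2.0, 1.0, 0.1]])
labels = np.array([[1.0, 0.0, 0.0]])
print(softmax_cross_entropy(logits, labels))  # ~[0.417]
```

With the graph defined, we run plain gradient descent and iterate: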
```python
num_steps = 801

def accuracy(predictions, labels):
    '''
    e.g. predictions = [0.8, 0, 0, 0, 0.1, 0, 0, 0.1, 0, 0]
         labels      = [1,   0, 0, 0, 0,   0, 0, 0,   0, 0]
    '''
    return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
            / predictions.shape[0])

with tf.Session(graph=graph) as session:
    # This is a one-time operation which ensures the parameters get
    # initialized as we described in the graph: random weights for the
    # matrix, zeros for the biases.
    tf.initialize_all_variables().run()
    print('Initialized')
    for step in range(num_steps):
        # Run the computations. We tell .run() that we want to run the
        # optimizer, and get the loss value and the training predictions
        # returned as numpy arrays.
        _, l, predictions = session.run([optimizer, loss, train_prediction])
        if (step % 100 == 0):
            print('Loss at step %d: %f' % (step, l))
            print('Training accuracy: %.1f%%' % accuracy(
                predictions, train_labels[:train_subset, :]))
            # Calling .eval() on valid_prediction is basically like calling
            # run(), but just to get that one numpy array. Note that it
            # recomputes all its graph dependencies.
            print('Validation accuracy: %.1f%%' % accuracy(
                valid_prediction.eval(), valid_labels))
    print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
```
The output is:
```
Initialized
Loss at step 0: 17.639723
Training accuracy: 8.9%
Validation accuracy: 11.4%
Loss at step 100: 2.268863
Training accuracy: 71.8%
Validation accuracy: 70.8%
Loss at step 200: 1.818829
Training accuracy: 74.9%
Validation accuracy: 73.6%
Loss at step 300: 1.580101
Training accuracy: 76.5%
Validation accuracy: 74.5%
Loss at step 400: 1.419103
Training accuracy: 77.1%
Validation accuracy: 75.1%
Loss at step 500: 1.299344
Training accuracy: 77.7%
Validation accuracy: 75.3%
Loss at step 600: 1.205005
Training accuracy: 78.3%
Validation accuracy: 75.3%
Loss at step 700: 1.127984
Training accuracy: 78.8%
Validation accuracy: 75.5%
Loss at step 800: 1.063572
Training accuracy: 79.3%
Validation accuracy: 75.7%
Test accuracy: 82.6%
```
Next, we can train with a faster optimization algorithm: stochastic gradient descent.
The graph definition is similar to before; the difference is that the training data now arrives in small minibatches.
We therefore define placeholders, which are fed a fresh minibatch of data each time session.run() is called.
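To see the placeholder mechanism in isolation, here is a minimal sketch (shapes and values are made up for illustration):

```python
import numpy as np
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    # A placeholder reserves a typed, shaped slot in the graph; no data yet.
    x = tf.placeholder(tf.float32, shape=(2, 3))
    doubled = x * 2.0

with tf.Session(graph=graph) as session:
    # Each run() call feeds a fresh numpy array into the slot via feed_dict.
    batch = np.ones((2, 3), dtype=np.float32)
    print(session.run(doubled, feed_dict={x: batch}))
```

The full minibatch version of our classifier looks like this: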
```python
batch_size = 128

graph = tf.Graph()
with graph.as_default():
    # 1. Input data. For the training data, we use a placeholder that will
    # be fed a training minibatch at run time. The placeholder only
    # reserves space; no data is attached yet.
    tf_train_dataset = tf.placeholder(tf.float32,
                                      shape=(batch_size, image_size * image_size))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
    tf_valid_dataset = tf.constant(valid_dataset)
    tf_test_dataset = tf.constant(test_dataset)

    # 2. Variables.
    weights = tf.Variable(
        tf.truncated_normal([image_size * image_size, num_labels]))
    biases = tf.Variable(tf.zeros([num_labels]))

    # 3. Training computation.
    logits = tf.matmul(tf_train_dataset, weights) + biases
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(
            logits=logits, labels=tf_train_labels))

    # 4. Optimizer.
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

    # 5. Predictions for the training, validation, and test data.
    train_prediction = tf.nn.softmax(logits)
    valid_prediction = tf.nn.softmax(tf.matmul(tf_valid_dataset, weights) + biases)
    test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
```

Here is the corresponding training code:
```python
num_steps = 3001

with tf.Session(graph=graph) as session:
    tf.initialize_all_variables().run()
    print("Initialized")
    for step in range(num_steps):
        # Pick an offset within the training data, which has been randomized.
        # Note: we could use better randomization across epochs.
        offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
        # Generate a minibatch.
        batch_data = train_dataset[offset:(offset + batch_size), :]
        batch_labels = train_labels[offset:(offset + batch_size), :]
        # Prepare a dictionary telling the session where to feed the
        # minibatch. The key of the dictionary is the placeholder node of
        # the graph to be fed, and the value is the numpy array to feed to it.
        feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels}
        _, l, predictions = session.run(
            [optimizer, loss, train_prediction], feed_dict=feed_dict)
        if (step % 500 == 0):
            print("Minibatch loss at step %d: %f" % (step, l))
            print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
            print("Validation accuracy: %.1f%%" % accuracy(
                valid_prediction.eval(), valid_labels))
    print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
```

The output is:
```
Initialized
Minibatch loss at step 0: 16.076256
Minibatch accuracy: 14.1%
Validation accuracy: 17.9%
Minibatch loss at step 500: 1.690020
Minibatch accuracy: 72.7%
Validation accuracy: 75.1%
Minibatch loss at step 1000: 1.430756
Minibatch accuracy: 77.3%
Validation accuracy: 76.1%
Minibatch loss at step 1500: 1.065795
Minibatch accuracy: 81.2%
Validation accuracy: 77.0%
Minibatch loss at step 2000: 1.248749
Minibatch accuracy: 75.0%
Validation accuracy: 77.3%
Minibatch loss at step 2500: 0.934266
Minibatch accuracy: 81.2%
Validation accuracy: 78.1%
Minibatch loss at step 3000: 1.047278
Minibatch accuracy: 76.6%
Validation accuracy: 78.4%
Test accuracy: 85.4%
```
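One detail in the loop above is worth spelling out: the modulo arithmetic makes the offset cycle through the (pre-shuffled) training set, so training simply wraps around once it reaches the end. A quick sketch with small stand-in numbers:

```python
batch_size = 128
num_examples = 1000  # stand-in for train_labels.shape[0]

for step in range(9):
    offset = (step * batch_size) % (num_examples - batch_size)
    print(step, offset)
# Offsets grow by 128 each step (0, 128, ..., 768) and then wrap:
# at step 7, 896 % 872 = 24, so the next minibatch starts near the front again.
```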
We can, of course, improve on this result: below we insert a hidden layer of 1024 ReLU units between the input and the output layer.
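Before the full graph, here is a quick numpy sanity check of the forward-pass shapes (random toy values; my own illustration):

```python
import numpy as np

# One-hidden-layer forward pass: logits = relu(X.W1 + b1).W2 + b2
X = np.random.randn(128, 784).astype(np.float32)   # a minibatch of inputs
W1 = np.random.randn(784, 1024).astype(np.float32)
b1 = np.zeros(1024, dtype=np.float32)
W2 = np.random.randn(1024, 10).astype(np.float32)
b2 = np.zeros(10, dtype=np.float32)

hidden = np.maximum(X.dot(W1) + b1, 0.0)  # ReLU
logits = hidden.dot(W2) + b2
print(hidden.shape, logits.shape)  # (128, 1024) (128, 10)
```

The full TensorFlow version follows: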
```python
batch_size = 128
hidden_layer_node_num = 1024

graph = tf.Graph()
with graph.as_default():
    # 1. Input data.
    tf_train_dataset = tf.placeholder(tf.float32,
                                      shape=(batch_size, image_size * image_size))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
    tf_valid_dataset = tf.constant(valid_dataset)
    tf_test_dataset = tf.constant(test_dataset)

    # 2. Variables.
    weights1 = tf.Variable(
        tf.truncated_normal([image_size * image_size, hidden_layer_node_num]))
    biases1 = tf.Variable(tf.zeros([hidden_layer_node_num]))
    # The hidden layer's output has shape (batch_size, hidden_layer_node_num).
    weights2 = tf.Variable(
        tf.truncated_normal([hidden_layer_node_num, num_labels]))
    biases2 = tf.Variable(tf.zeros([num_labels]))

    # 3. Training computation.
    logits = tf.matmul(
        tf.nn.relu(tf.matmul(tf_train_dataset, weights1) + biases1),
        weights2) + biases2
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(
            logits=logits, labels=tf_train_labels))

    # 4. Optimizer.
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

    # 5. Predictions for the training, validation, and test data.
    train_prediction = tf.nn.softmax(logits)
    valid_prediction = tf.nn.softmax(tf.matmul(
        tf.nn.relu(tf.matmul(tf_valid_dataset, weights1) + biases1),
        weights2) + biases2)
    test_prediction = tf.nn.softmax(tf.matmul(
        tf.nn.relu(tf.matmul(tf_test_dataset, weights1) + biases1),
        weights2) + biases2)

num_steps = 3001

with tf.Session(graph=graph) as session:
    tf.initialize_all_variables().run()
    print("Initialized")
    for step in range(num_steps):
        # Pick an offset within the training data, which has been randomized.
        # Note: we could use better randomization across epochs.
        offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
        # Generate a minibatch.
        batch_data = train_dataset[offset:(offset + batch_size), :]
        batch_labels = train_labels[offset:(offset + batch_size), :]
        # Feed the minibatch into the placeholders.
        feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels}
        _, l, predictions = session.run(
            [optimizer, loss, train_prediction], feed_dict=feed_dict)
        if (step % 500 == 0):
            print("Minibatch loss at step %d: %f" % (step, l))
            print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
            print("Validation accuracy: %.1f%%" % accuracy(
                valid_prediction.eval(), valid_labels))
    print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
```

The output is:
```
Initialized
Minibatch loss at step 0: 379.534973
Minibatch accuracy: 8.6%
Validation accuracy: 21.7%
Minibatch loss at step 500: 12.951815
Minibatch accuracy: 86.7%
Validation accuracy: 80.8%
Minibatch loss at step 1000: 9.569818
Minibatch accuracy: 82.8%
Validation accuracy: 80.9%
Minibatch loss at step 1500: 7.165316
Minibatch accuracy: 84.4%
Validation accuracy: 78.8%
Minibatch loss at step 2000: 10.387121
Minibatch accuracy: 78.9%
Validation accuracy: 80.8%
Minibatch loss at step 2500: 3.324355
Minibatch accuracy: 80.5%
Validation accuracy: 80.8%
Minibatch loss at step 3000: 4.396149
Minibatch accuracy: 89.8%
Validation accuracy: 81.3%
Test accuracy: 88.9%
```
The test accuracy reaches 88.9%.
With that, a simple neural network is up and running.