TensorFlow for Beginners: SGD
Before starting this task, make sure you have completed the earlier notMNIST steps (see the notMNIST post).
Hint: training with stochastic gradient descent (SGD) should take noticeably less time than plain gradient descent (GD).
1. Check the packages
First, check that the packages needed for this lesson have all been imported correctly. Enter the following code and click "run cell"; it should run without errors:
# Make sure these packages are imported correctly before starting
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
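If you also want to confirm which versions are installed, a quick check may help; this walkthrough assumes the TensorFlow 1.x API:

# Print the installed versions (both attributes are standard)
print('numpy:', np.__version__)
print('tensorflow:', tf.__version__)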
2. Load the pickle file
Load the notMNIST.pickle file generated earlier (linked at the top of this post):
pickle_file = 'notMNIST.pickle'

with open(pickle_file, 'rb') as f:
  save = pickle.load(f)
  train_dataset = save['train_dataset']
  train_labels = save['train_labels']
  valid_dataset = save['valid_dataset']
  valid_labels = save['valid_labels']
  test_dataset = save['test_dataset']
  test_labels = save['test_labels']
  del save  # hint to help gc free up memory
  print('Training set', train_dataset.shape, train_labels.shape)
  print('Validation set', valid_dataset.shape, valid_labels.shape)
  print('Test set', test_dataset.shape, test_labels.shape)
The output:
Training set (200000, 28, 28) (200000,)
Validation set (10000, 28, 28) (10000,)
Test set (18724, 28, 28) (18724,)
3. Reformat the data
Reshape the data into a format better suited to the models we are about to train:
the data as a flat matrix, and the labels as one-hot encodings.
image_size = 28
num_labels = 10

def reformat(dataset, labels):
  dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
  # Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
  labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
  return dataset, labels

train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
Output:
Training set (200000, 784) (200000, 10)
Validation set (10000, 784) (10000, 10)
Test set (10000, 784) (10000, 10)
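The one-liner for the labels relies on numpy broadcasting: comparing np.arange(num_labels) against a column of labels yields one boolean row per label, which is then cast to float. A tiny standalone demo (with a made-up label array) shows the idea:

import numpy as np

num_labels = 4                  # small, just for illustration
labels = np.array([0, 2, 3])    # hypothetical labels

# Broadcasting: a (1, 4) row compared against a (3, 1) column -> (3, 4) booleans
one_hot = (np.arange(num_labels) == labels[:, None]).astype(np.float32)
print(one_hot)
# [[1. 0. 0. 0.]
#  [0. 0. 1. 0.]
#  [0. 0. 0. 1.]]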
4. Plain gradient descent (GD)
We first train a multinomial logistic regression using plain gradient descent.
The TensorFlow workflow:
First, describe the computation you want performed on your inputs, variables, and operations; these become nodes of a computation graph. This description is all contained inside the block:
with graph.as_default():
Then you can run the desired operations on that graph as many times as you like by calling session.run(), which evaluates the graph and returns the requested outputs. This runtime part is all contained inside the block:
with tf.Session(graph=graph) as session:
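As a minimal illustration of this two-phase workflow (define, then run), here is a toy graph that just adds two constants; the names a, b, and total are made up for this sketch:

graph = tf.Graph()
with graph.as_default():
  # Phase 1: describe the computation as graph nodes.
  a = tf.constant(2.0)
  b = tf.constant(3.0)
  total = a + b

with tf.Session(graph=graph) as session:
  # Phase 2: run the graph and fetch the output.
  print(session.run(total))  # 5.0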
We load all the data into TensorFlow and build the computation graph corresponding to our training:
# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000

graph = tf.Graph()
with graph.as_default():

  # Input data.
  # Load the training, validation and test data into constants that are
  # attached to the graph.
  tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
  tf_train_labels = tf.constant(train_labels[:train_subset])
  tf_valid_dataset = tf.constant(valid_dataset)
  tf_test_dataset = tf.constant(test_dataset)

  # Variables.
  # These are the parameters that we are going to be training. The weight
  # matrix will be initialized using random values following a (truncated)
  # normal distribution. The biases get initialized to zero.
  weights = tf.Variable(
    tf.truncated_normal([image_size * image_size, num_labels]))
  biases = tf.Variable(tf.zeros([num_labels]))

  # Training computation.
  # We multiply the inputs with the weight matrix, and add biases. We compute
  # the softmax and cross-entropy (it's one operation in TensorFlow, because
  # it's very common, and it can be optimized). We take the average of this
  # cross-entropy across all training examples: that's our loss.
  logits = tf.matmul(tf_train_dataset, weights) + biases
  loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels,
                                            logits=logits))

  # Optimizer.
  # We are going to find the minimum of this loss using gradient descent.
  optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

  # Predictions for the training, validation, and test data.
  # These are not part of training, but merely here so that we can report
  # accuracy figures as we train.
  train_prediction = tf.nn.softmax(logits)
  valid_prediction = tf.nn.softmax(
    tf.matmul(tf_valid_dataset, weights) + biases)
  test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
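The fused softmax_cross_entropy_with_logits op computes the same quantity as taking a softmax and then the cross-entropy against the one-hot labels. A small numpy sketch of that math (not the op's actual implementation, which is numerically more careful):

import numpy as np

def softmax(z):
  e = np.exp(z - z.max(axis=1, keepdims=True))  # shift for stability
  return e / e.sum(axis=1, keepdims=True)

logits = np.array([[2.0, 1.0, 0.1]])
labels = np.array([[1.0, 0.0, 0.0]])  # one-hot

# Cross-entropy: -sum over classes of label * log(softmax probability)
loss = -np.sum(labels * np.log(softmax(logits)), axis=1)
print(loss)  # about [0.417]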
Let's run this computation and iterate:
num_steps = 801

def accuracy(predictions, labels):
  return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
          / predictions.shape[0])

with tf.Session(graph=graph) as session:
  # This is a one-time operation which ensures the parameters get initialized as
  # we described in the graph: random weights for the matrix, zeros for the
  # biases.
  tf.global_variables_initializer().run()
  print('Initialized')
  for step in range(num_steps):
    # Run the computations. We tell .run() that we want to run the optimizer,
    # and get the loss value and the training predictions returned as numpy
    # arrays.
    _, l, predictions = session.run([optimizer, loss, train_prediction])
    if (step % 100 == 0):
      print('Loss at step %d: %f' % (step, l))
      print('Training accuracy: %.1f%%' % accuracy(
        predictions, train_labels[:train_subset, :]))
      # Calling .eval() on valid_prediction is basically like calling run(), but
      # just to get that one numpy array. Note that it recomputes all its graph
      # dependencies.
      print('Validation accuracy: %.1f%%' % accuracy(
        valid_prediction.eval(), valid_labels))
  print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
Output:
Initialized
Loss at step 0: 16.442284
Training accuracy: 7.8%
Validation accuracy: 11.4%
Loss at step 100: 2.226995
Training accuracy: 72.3%
Validation accuracy: 70.9%
Loss at step 200: 1.799694
Training accuracy: 75.2%
Validation accuracy: 73.5%
Loss at step 300: 1.574350
Training accuracy: 76.3%
Validation accuracy: 74.3%
Loss at step 400: 1.420926
Training accuracy: 77.2%
Validation accuracy: 74.9%
Loss at step 500: 1.305450
Training accuracy: 77.9%
Validation accuracy: 75.2%
Loss at step 600: 1.214321
Training accuracy: 78.5%
Validation accuracy: 75.4%
Loss at step 700: 1.140065
Training accuracy: 78.9%
Validation accuracy: 75.3%
Loss at step 800: 1.078110
Training accuracy: 79.4%
Validation accuracy: 75.5%
Test accuracy: 82.9%
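Incidentally, if you want to verify the hint from the start of this post, that SGD takes less time than GD, you can time each training session with the standard time module. A minimal sketch wrapping the same GD loop as above:

import time

start = time.time()
with tf.Session(graph=graph) as session:
  tf.global_variables_initializer().run()
  for step in range(num_steps):
    _, l, predictions = session.run([optimizer, loss, train_prediction])
print('Training took %.1f seconds' % (time.time() - start))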
5. Stochastic gradient descent (SGD)
Now let's switch to stochastic gradient descent training instead, which is much faster.
The graph is similar, except that instead of holding all the training data in a constant node, we create a placeholder node that is fed actual minibatch data at each call to session.run().
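In isolation, the placeholder-plus-feed_dict mechanism looks like this (a minimal sketch; demo_graph, x, and doubled are made-up names):

demo_graph = tf.Graph()
with demo_graph.as_default():
  # A placeholder has no value until it is fed at run time.
  x = tf.placeholder(tf.float32, shape=(2,))
  doubled = x * 2.0

with tf.Session(graph=demo_graph) as session:
  # feed_dict maps each placeholder to a numpy-compatible value.
  print(session.run(doubled, feed_dict={x: [1.0, 2.0]}))  # [2. 4.]

The full SGD graph below uses the same mechanism for the training minibatch.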
batch_size = 128

graph = tf.Graph()
with graph.as_default():

  # Input data. For the training data, we use a placeholder that will be fed
  # at run time with a training minibatch.
  tf_train_dataset = tf.placeholder(tf.float32,
                                    shape=(batch_size, image_size * image_size))
  tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
  tf_valid_dataset = tf.constant(valid_dataset)
  tf_test_dataset = tf.constant(test_dataset)

  # Variables.
  weights = tf.Variable(
    tf.truncated_normal([image_size * image_size, num_labels]))
  biases = tf.Variable(tf.zeros([num_labels]))

  # Training computation.
  logits = tf.matmul(tf_train_dataset, weights) + biases
  loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels,
                                            logits=logits))

  # Optimizer.
  optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

  # Predictions for the training, validation, and test data.
  train_prediction = tf.nn.softmax(logits)
  valid_prediction = tf.nn.softmax(
    tf.matmul(tf_valid_dataset, weights) + biases)
  test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
Then run the following:
num_steps = 3001

with tf.Session(graph=graph) as session:
  tf.global_variables_initializer().run()
  print("Initialized")
  for step in range(num_steps):
    # Pick an offset within the training data, which has been randomized.
    # Note: we could use better randomization across epochs.
    offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
    # Generate a minibatch.
    batch_data = train_dataset[offset:(offset + batch_size), :]
    batch_labels = train_labels[offset:(offset + batch_size), :]
    # Prepare a dictionary telling the session where to feed the minibatch.
    # The key of the dictionary is the placeholder node of the graph to be fed,
    # and the value is the numpy array to feed to it.
    feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
    _, l, predictions = session.run(
      [optimizer, loss, train_prediction], feed_dict=feed_dict)
    if (step % 500 == 0):
      print("Minibatch loss at step %d: %f" % (step, l))
      print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
      print("Validation accuracy: %.1f%%" % accuracy(
        valid_prediction.eval(), valid_labels))
  print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
The output:
Initialized
Minibatch loss at step 0: 17.488043
Minibatch accuracy: 11.7%
Validation accuracy: 12.7%
Minibatch loss at step 500: 1.099625
Minibatch accuracy: 79.7%
Validation accuracy: 75.5%
Minibatch loss at step 1000: 1.522583
Minibatch accuracy: 76.6%
Validation accuracy: 76.4%
Minibatch loss at step 1500: 0.659283
Minibatch accuracy: 84.4%
Validation accuracy: 76.4%
Minibatch loss at step 2000: 0.849694
Minibatch accuracy: 84.4%
Validation accuracy: 77.2%
Minibatch loss at step 2500: 1.101751
Minibatch accuracy: 75.0%
Validation accuracy: 78.0%
Minibatch loss at step 3000: 1.034247
Minibatch accuracy: 78.1%
Validation accuracy: 78.4%
Test accuracy: 86.8%
The difference is obvious here: SGD training finishes quickly, taking noticeably less time than the plain gradient descent (GD) above.
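The training loop's comment notes that "we could use better randomization across epochs." One common approach is to reshuffle the example order at the start of every epoch; a hedged sketch of that idea (the permutation logic and the names steps_per_epoch, perm, and idx are assumptions, not part of the original code):

num_examples = train_labels.shape[0]
steps_per_epoch = num_examples // batch_size

for step in range(num_steps):
  if step % steps_per_epoch == 0:
    perm = np.random.permutation(num_examples)  # reshuffle at each epoch
  i = (step % steps_per_epoch) * batch_size
  idx = perm[i : i + batch_size]
  batch_data = train_dataset[idx, :]
  batch_labels = train_labels[idx, :]
  # ...then feed batch_data / batch_labels exactly as in the loop above...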
6. A simple multi-layer neural network
Take the logistic regression example with SGD and turn it into a neural network with one hidden layer, using rectified linear units (nn.relu()) and 1024 hidden nodes.
This model should improve the validation/test accuracy.
Before doing that, have a look at the following small example:
# Solution is available in the other "solution.py" tab
import tensorflow as tf

output = None
hidden_layer_weights = [
  [0.1, 0.2, 0.4],
  [0.4, 0.6, 0.6],
  [0.5, 0.9, 0.1],
  [0.8, 0.2, 0.8]]
out_weights = [
  [0.1, 0.6],
  [0.2, 0.1],
  [0.7, 0.9]]

# Weights and biases
weights = [
  tf.Variable(hidden_layer_weights),
  tf.Variable(out_weights)]
biases = [
  tf.Variable(tf.zeros(3)),
  tf.Variable(tf.zeros(2))]

# Input
features = tf.Variable([[1.0, 2.0, 3.0, 4.0],
                        [-1.0, -2.0, -3.0, -4.0],
                        [11.0, 12.0, 13.0, 14.0]])

# TODO: Create Model
hidden_layer = tf.add(tf.matmul(features, weights[0]), biases[0])
hidden_layer = tf.nn.relu(hidden_layer)
logits = tf.add(tf.matmul(hidden_layer, weights[1]), biases[1])

# TODO: Print session results
with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  print(sess.run(logits))
Output:
[[ 5.11000013 8.44000053]
[ 0. 0. ]
[ 24.01000214 38.23999786]]
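You can check this output by hand: the second row is all zeros because every hidden pre-activation is negative there, so the ReLU zeroes it out. A numpy re-computation of the same forward pass:

import numpy as np

W1 = np.array([[0.1, 0.2, 0.4],
               [0.4, 0.6, 0.6],
               [0.5, 0.9, 0.1],
               [0.8, 0.2, 0.8]])    # (4, 3) hidden weights
W2 = np.array([[0.1, 0.6],
               [0.2, 0.1],
               [0.7, 0.9]])         # (3, 2) output weights
X = np.array([[1.0, 2.0, 3.0, 4.0],
              [-1.0, -2.0, -3.0, -4.0],
              [11.0, 12.0, 13.0, 14.0]])

hidden = np.maximum(X.dot(W1), 0.0)  # matmul + ReLU (biases are zero)
print(hidden.dot(W2))
# [[ 5.11  8.44]
#  [ 0.    0.  ]
#  [24.01 38.24]]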
Now, back to the actual task: add a hidden layer with 1024 nodes:
batch_size = 128
hiden_layer_node_num = 1024

graph = tf.Graph()
with graph.as_default():

  # Input ----------------------------------------------------------- 1
  tf_train_dataset = tf.placeholder(tf.float32,
                                    shape=(batch_size, image_size * image_size))
  tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
  tf_valid_dataset = tf.constant(valid_dataset)
  tf_test_dataset = tf.constant(test_dataset)

  # Variables. ------------------------------------------------------- 2
  weights1 = tf.Variable(
    tf.truncated_normal([image_size * image_size, hiden_layer_node_num]))
  biases1 = tf.Variable(tf.zeros([hiden_layer_node_num]))
  # hidden layer output has shape (batch_size, hiden_layer_node_num)
  weights2 = tf.Variable(
    tf.truncated_normal([hiden_layer_node_num, num_labels]))
  biases2 = tf.Variable(tf.zeros([num_labels]))

  # Training computation. --------------------------------------------- 3
  logits = tf.matmul(
    tf.nn.relu(tf.matmul(tf_train_dataset, weights1) + biases1),
    weights2) + biases2
  loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels,
                                            logits=logits))

  # Optimizer. --------------------------------------------------------- 4
  optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

  # Predictions for the training, validation, and test data. ------------ 5
  train_prediction = tf.nn.softmax(logits)
  valid_prediction = tf.nn.softmax(
    tf.matmul(tf.nn.relu(tf.matmul(tf_valid_dataset, weights1) + biases1),
              weights2) + biases2)
  test_prediction = tf.nn.softmax(
    tf.matmul(tf.nn.relu(tf.matmul(tf_test_dataset, weights1) + biases1),
              weights2) + biases2)

num_steps = 3001

with tf.Session(graph=graph) as session:
  # Note: initialize_all_variables() has been replaced by
  # global_variables_initializer()
  # tf.initialize_all_variables().run()
  tf.global_variables_initializer().run()
  print("Initialized")
  for step in range(num_steps):
    # Pick an offset within the training data, which has been randomized.
    # Note: we could use better randomization across epochs.
    offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
    # Generate a minibatch.
    batch_data = train_dataset[offset:(offset + batch_size), :]
    batch_labels = train_labels[offset:(offset + batch_size), :]
    # Prepare a dictionary telling the session where to feed the minibatch.
    # The key of the dictionary is the placeholder node of the graph to be fed,
    # and the value is the numpy array to feed to it.
    feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
    _, l, predictions = session.run(
      [optimizer, loss, train_prediction], feed_dict=feed_dict)
    if (step % 500 == 0):
      print("Minibatch loss at step %d: %f" % (step, l))
      print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
      print("Validation accuracy: %.1f%%" % accuracy(
        valid_prediction.eval(), valid_labels))
  print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
The output:
Initialized
Minibatch loss at step 0: 336.547974
Minibatch accuracy: 6.2%
Validation accuracy: 29.0%
Minibatch loss at step 500: 17.726200
Minibatch accuracy: 84.4%
Validation accuracy: 80.2%
Minibatch loss at step 1000: 15.383211
Minibatch accuracy: 75.0%
Validation accuracy: 81.4%
Minibatch loss at step 1500: 5.096069
Minibatch accuracy: 87.5%
Validation accuracy: 80.3%
Minibatch loss at step 2000: 3.049689
Minibatch accuracy: 82.0%
Validation accuracy: 81.2%
Minibatch loss at step 2500: 3.715914
Minibatch accuracy: 79.7%
Validation accuracy: 82.4%
Minibatch loss at step 3000: 1.516045
Minibatch accuracy: 78.9%
Validation accuracy: 82.8%
Test accuracy: 89.7%
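The accuracy gain comes at the cost of a much larger model: the logistic regression has 784*10 + 10 = 7,850 parameters, while the one-hidden-layer network has 784*1024 + 1024 + 1024*10 + 10 = 814,090. A quick check:

image_size, num_labels, hidden = 28, 10, 1024

logreg_params = image_size * image_size * num_labels + num_labels
mlp_params = (image_size * image_size * hidden + hidden
              + hidden * num_labels + num_labels)
print(logreg_params)  # 7850
print(mlp_params)     # 814090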