[TensorFlow] Introductory Learning Notes (6): A Simple TensorBoard Tutorial and Model Saving
Model Saving
tf.train.Saver()
The Saver class adds ops to save and restore variables to and from checkpoints. It also provides convenience methods to run these ops.
It provides two important methods.
One is saver.save(), which writes a session's model variables to a checkpoint at the given save path; an optional global_step argument appends the iteration count to the checkpoint filename.
As for restore(), I think the best way to understand the restore operation is to view it simply as a form of data initialization: it fills the variables of the current session with the values saved from the previous session, in place of the usual initializer.
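A minimal, self-contained sketch of the two calls (the single variable and the ../tmp path are only illustrative, and assume the ../tmp directory already exists):

import tensorflow as tf

w = tf.Variable(tf.zeros([10]), name='w')  # any variable worth checkpointing
saver = tf.train.Saver()                   # saves all variables by default

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training would go here ...
    # global_step appends the iteration count to the filename,
    # so this writes ../tmp/model.ckpt-1000
    save_path = saver.save(sess, "../tmp/model.ckpt", global_step=1000)

with tf.Session() as sess:
    # restore() takes the place of running the initializer
    saver.restore(sess, save_path)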
The full MNIST example: train for three epochs, save the model, then restore it in a new session and continue training:

# -*- coding: UTF-8 -*-
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

learning_rate = 0.001
batch_size = 100
display_step = 1
model_path = "../tmp/model.ckpt"

n_hidden_1 = 256
n_hidden_2 = 256
n_input = 784
n_classes = 10

x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])

weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

# Build the model
def multilayer_perceptron(x, weights, biases):
    # Hidden layer 1 with ReLU activation
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_1 = tf.nn.relu(layer_1)
    # Hidden layer 2 with ReLU activation
    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
    layer_2 = tf.nn.relu(layer_2)
    # Output layer with linear activation
    out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
    return out_layer

pred = multilayer_perceptron(x, weights, biases)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

init = tf.global_variables_initializer()
saver = tf.train.Saver()

if __name__ == '__main__':
    print("Starting 1st session...")
    with tf.Session() as sess:
        # Initialize variables
        sess.run(init)
        for epoch in range(3):
            avg_cost = 0.
            total_batch = int(mnist.train.num_examples / batch_size)
            # Loop over all batches
            for i in range(total_batch):
                batch_x, batch_y = mnist.train.next_batch(batch_size)
                _, c = sess.run([optimizer, cost],
                                feed_dict={x: batch_x, y: batch_y})
                avg_cost += c / total_batch
            if epoch % display_step == 0:
                print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost))
        print("First Optimization Finished!")

        # Test model
        correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
        # Calculate accuracy
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
        print("Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))

        # Save model weights to disk
        save_path = saver.save(sess, model_path)
        print("Model saved in file: %s" % save_path)

    # Running a new session
    with tf.Session() as sess:
        sess.run(init)
        # The best way to understand restore is to see it as a data-initialization
        # step: it overwrites the freshly initialized variables with the saved values
        saver.restore(sess, model_path)
        print("Model restored from file: %s" % model_path)

        for epoch in range(7):
            avg_cost = 0.
            total_batch = int(mnist.train.num_examples / batch_size)
            # Loop over all batches
            for i in range(total_batch):
                batch_x, batch_y = mnist.train.next_batch(batch_size)
                _, c = sess.run([optimizer, cost],
                                feed_dict={x: batch_x, y: batch_y})
                avg_cost += c / total_batch
            if epoch % display_step == 0:
                print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost))
        print("Second Optimization Finished!")

        # Test model
        correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
        # Calculate accuracy
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
        print("Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))
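Note that a checkpoint is not a single file on disk: the saver.save() call above writes model.ckpt.index, model.ckpt.data-00000-of-00001 and model.ckpt.meta, plus a small bookkeeping file named checkpoint; the string passed to restore() is their common prefix, not an actual filename. When numbered checkpoints accumulate (as with global_step earlier), the newest prefix can be looked up with tf.train.latest_checkpoint. A small sketch, reusing the saver and an open session from above and assuming the checkpoints live under ../tmp/:

ckpt = tf.train.latest_checkpoint("../tmp/")  # e.g. "../tmp/model.ckpt-1000", or None
if ckpt is not None:
    saver.restore(sess, ckpt)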
TensorBoard
tf.summary.scalar() registers a variable to be recorded and displayed in TensorBoard. Every summary is itself an op, so after defining the scalars, merge them all into one combined op with tf.summary.merge_all().
Inside the session's training loop, run() that merged op to obtain the serialized summary data.
summary_writer = tf.summary.FileWriter(logs_path, graph=tf.get_default_graph())
This writer saves the graph and all summary records to the log directory so TensorBoard can read them later.
Then, inside the loop, add each freshly computed summary to the writer with add_summary(); a sketch of the whole pipeline follows.
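A toy, self-contained sketch of those steps put together (the fake loss here just shrinks toward zero; the log path is arbitrary):

import tensorflow as tf

x = tf.placeholder(tf.float32, name='x')
loss = tf.square(x - 3.0)                   # a toy "loss" to track
tf.summary.scalar("loss", loss)             # register the scalar
merged_summary_op = tf.summary.merge_all()  # combine every summary op into one

with tf.Session() as sess:
    writer = tf.summary.FileWriter('../tmp/tensorflow_logs/sketch',
                                   graph=tf.get_default_graph())
    for step in range(100):
        summary = sess.run(merged_summary_op,
                           feed_dict={x: 3.0 - 3.0 / (step + 1)})
        writer.add_summary(summary, step)   # one point per step on the curve
    writer.close()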
Finally, launch TensorBoard from the terminal, pointing it at the log directory:

tensorboard --logdir=../tmp/tensorflow_logs

Then open http://localhost:6006/ in your web browser.
Basic model
A softmax-regression MNIST model with loss and accuracy scalars logged on every iteration:

# -*- coding: UTF-8 -*-
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

learning_rate = 0.01
training_epochs = 25
batch_size = 100
display_step = 1
logs_path = '../tmp/tensorflow_logs/example'

x = tf.placeholder(tf.float32, [None, 784], name='InputData')
y = tf.placeholder(tf.float32, [None, 10], name='LabelData')
w = tf.Variable(tf.zeros([784, 10]), name='Weights')
b = tf.Variable(tf.zeros([10]), name='Bias')

with tf.name_scope('Model'):
    pred = tf.nn.softmax(tf.matmul(x, w) + b)
with tf.name_scope('Loss'):
    cost = tf.reduce_mean(-tf.reduce_sum(y * tf.log(pred), reduction_indices=1))
with tf.name_scope('SGD'):
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
with tf.name_scope('Accuracy'):
    acc = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    acc = tf.reduce_mean(tf.cast(acc, tf.float32))

init = tf.global_variables_initializer()

tf.summary.scalar("loss", cost)
tf.summary.scalar("accuracy", acc)
merged_summary_op = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(init)
    summary_writer = tf.summary.FileWriter(logs_path, graph=tf.get_default_graph())
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples / batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            _, c, summary = sess.run([optimizer, cost, merged_summary_op],
                                     feed_dict={x: batch_xs, y: batch_ys})
            # Write logs at every iteration
            summary_writer.add_summary(summary, epoch * total_batch + i)
            avg_cost += c / total_batch
        if (epoch + 1) % display_step == 0:
            print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost))

    print("Optimization Finished!")
    # Test model: calculate accuracy
    print("Accuracy:", acc.eval({x: mnist.test.images, y: mnist.test.labels}))
    print("Run the command line:\n"
          "--> tensorboard --logdir=../tmp/tensorflow_logs "
          "\nThen open http://localhost:6006/ in your web browser")
An upgraded TensorBoard example
The same MNIST task with a two-hidden-layer perceptron; in addition to the loss and accuracy scalars, it logs histograms of activations, weights, and gradients:

# -*- coding: UTF-8 -*-
import tensorflow as tf
# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# Parameters
learning_rate = 0.01
training_epochs = 10
batch_size = 100
display_step = 1
logs_path = '../tmp/tensorflow_logs/example2'

# Network Parameters
n_hidden_1 = 20  # 1st layer number of features
n_hidden_2 = 40  # 2nd layer number of features
n_input = 784    # MNIST data input (img shape: 28*28)
n_classes = 10   # MNIST total classes (0-9 digits)

# tf Graph Input
# mnist data image of shape 28*28=784
x = tf.placeholder(tf.float32, [None, 784], name='InputData')
# 0-9 digits recognition => 10 classes
y = tf.placeholder(tf.float32, [None, 10], name='LabelData')

# Use tf.summary.scalar to record scalars
# Use tf.summary.histogram to record histograms of values
# (TensorBoard's DISTRIBUTIONS tab is built from the same histogram summaries)
# Use tf.summary.image to record image data

# Create model
def multilayer_perceptron(x, weights, biases):
    # Hidden layer with RELU activation
    layer_1 = tf.add(tf.matmul(x, weights['w1']), biases['b1'])
    layer_1 = tf.nn.relu(layer_1)
    # Create a summary to visualize the first layer ReLU activation
    tf.summary.histogram("relu1", layer_1)
    # Hidden layer with RELU activation
    layer_2 = tf.add(tf.matmul(layer_1, weights['w2']), biases['b2'])
    layer_2 = tf.nn.relu(layer_2)
    # Create another summary to visualize the second layer ReLU activation
    tf.summary.histogram("relu2", layer_2)
    # Output layer
    out_layer = tf.add(tf.matmul(layer_2, weights['w3']), biases['b3'])
    return out_layer

# Store layers weight & bias
weights = {
    'w1': tf.Variable(tf.random_normal([n_input, n_hidden_1]), name='W1'),
    'w2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2]), name='W2'),
    'w3': tf.Variable(tf.random_normal([n_hidden_2, n_classes]), name='W3')
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1]), name='b1'),
    'b2': tf.Variable(tf.random_normal([n_hidden_2]), name='b2'),
    'b3': tf.Variable(tf.random_normal([n_classes]), name='b3')
}

# Encapsulating all ops into scopes makes TensorBoard's graph
# visualization more convenient
with tf.name_scope('Model'):
    # Build model
    pred = multilayer_perceptron(x, weights, biases)

with tf.name_scope('Loss'):
    # Softmax cross entropy (cost function)
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))

with tf.name_scope('SGD'):
    # Gradient descent
    optimizer = tf.train.GradientDescentOptimizer(learning_rate)
    # Op to calculate every variable gradient
    grads = tf.gradients(loss, tf.trainable_variables())
    grads = list(zip(grads, tf.trainable_variables()))
    # Op to update all variables according to their gradient
    apply_grads = optimizer.apply_gradients(grads_and_vars=grads)

with tf.name_scope('Accuracy'):
    # Accuracy
    acc = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    acc = tf.reduce_mean(tf.cast(acc, tf.float32))

# Initializing the variables
init = tf.global_variables_initializer()

# Create a summary to monitor cost tensor
tf.summary.scalar("loss", loss)
# Create a summary to monitor accuracy tensor
tf.summary.scalar("accuracy", acc)
# Create summaries to visualize weights
for var in tf.trainable_variables():
    tf.summary.histogram(var.name, var)
# Summarize all gradients
for grad, var in grads:
    tf.summary.histogram(var.name + '/gradient', grad)
# Merge all summaries into a single op
merged_summary_op = tf.summary.merge_all()

# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    # Op to write logs to TensorBoard
    summary_writer = tf.summary.FileWriter(logs_path, graph=tf.get_default_graph())
    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples / batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            # Run optimization op (backprop), cost op (to get loss value)
            # and summary nodes
            _, c, summary = sess.run([apply_grads, loss, merged_summary_op],
                                     feed_dict={x: batch_xs, y: batch_ys})
            # Write logs at every iteration
            summary_writer.add_summary(summary, epoch * total_batch + i)
            # Compute average loss
            avg_cost += c / total_batch
        # Display logs per epoch step
        if (epoch + 1) % display_step == 0:
            print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost))

    print("Optimization Finished!")
    # Test model: calculate accuracy
    print("Accuracy:", acc.eval({x: mnist.test.images, y: mnist.test.labels}))
    print("Run the command line:\n"
          "--> tensorboard --logdir=../tmp/tensorflow_logs "
          "\nThen open http://localhost:6006/ in your web browser")
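The comment block near the top mentions tf.summary.image, which this script never actually exercises. A hedged sketch of how it could be added next to the other summary definitions, before merge_all() (the tag 'input' and the max_outputs value are arbitrary; x is the flat 784-dimensional placeholder from the script):

# Reshape flat 784-vectors back into 28x28 grayscale images so
# TensorBoard's IMAGES tab can render a few of the input digits
x_image = tf.reshape(x, [-1, 28, 28, 1])
tf.summary.image('input', x_image, max_outputs=3)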