Learning TensorFlow: Saving and Restoring the Learned Network Parameters
In deep learning, whichever framework you use, you run into two important questions: after training, how do you store the learned parameters of a deep network, and at test time, how do you load them back? This post explores how TensorFlow answers both. It is organized in two parts: the first covers the relevant TensorFlow functions, and the second walks through a complete code example.
1. TensorFlow-related functions
Both jobs are handled by a single class, tf.train.Saver:

    saver = tf.train.Saver()
    save_path = saver.save(sess, model_path)
    saver.restore(sess, model_path)

saver = tf.train.Saver() constructs a Saver object that saves and restores the learned network parameters; the parameters are stored on disk as checkpoint files.

save_path = saver.save(sess, model_path) writes the learned network parameters from the session to the path model_path and returns the path actually written.

saver.restore(sess, model_path) loads the parameters saved at model_path back into the variables of the current graph. Note that restore assigns the saved values in place and returns None, so there is no need to capture a return value.
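A minimal end-to-end sketch of this save/restore cycle, assuming the TF 0.x-era API used throughout this post (the variable, its value, and the checkpoint path /tmp/demo.ckpt are made up for illustration):

    import tensorflow as tf

    # A made-up variable to persist; any graph variables work the same way.
    v = tf.Variable(tf.zeros([2]), name="v")
    set_v = v.assign([1.0, 2.0])
    saver = tf.train.Saver()

    # First session: give v a value, then save all variables to a checkpoint.
    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        sess.run(set_v)
        path = saver.save(sess, "/tmp/demo.ckpt")
        print("Saved to %s" % path)

    # Second session: restore() assigns the saved values directly into the
    # graph's variables, so no initialization is needed first.
    with tf.Session() as sess:
        saver.restore(sess, "/tmp/demo.ckpt")
        print(sess.run(v))  # prints the restored value [ 1.  2.]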
2. Code example
'''
Save and Restore a model using TensorFlow.
This example is using the MNIST database of handwritten digits
(http://yann.lecun.com/exdb/mnist/)
Author: Aymeric Damien
Project: https://github.com/aymericdamien/TensorFlow-Examples/
'''

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

import tensorflow as tf

# Parameters
learning_rate = 0.001
batch_size = 100
display_step = 1
model_path = "/home/lei/TensorFlow-Examples-master/examples/4_Utils/model.ckpt"

# Network parameters
n_hidden_1 = 256  # 1st layer number of features
n_hidden_2 = 256  # 2nd layer number of features
n_input = 784     # MNIST data input (img shape: 28*28)
n_classes = 10    # MNIST total classes (0-9 digits)

# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])

# Create model
def multilayer_perceptron(x, weights, biases):
    # Hidden layer with ReLU activation
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_1 = tf.nn.relu(layer_1)
    # Hidden layer with ReLU activation
    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
    layer_2 = tf.nn.relu(layer_2)
    # Output layer with linear activation
    out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
    return out_layer

# Store layers' weights & biases
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

# Construct model
pred = multilayer_perceptron(x, weights, biases)

# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# Initializing the variables
init = tf.initialize_all_variables()

# 'Saver' op to save and restore all the variables
saver = tf.train.Saver()

# Running first session
print "Starting 1st session..."
with tf.Session() as sess:
    # Initialize variables
    sess.run(init)

    # Training cycle
    for epoch in range(3):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples / batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_x, batch_y = mnist.train.next_batch(batch_size)
            # Run optimization op (backprop) and cost op (to get loss value)
            _, c = sess.run([optimizer, cost],
                            feed_dict={x: batch_x, y: batch_y})
            # Compute average loss
            avg_cost += c / total_batch
        # Display logs per epoch step
        if epoch % display_step == 0:
            print "Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost)
    print "First Optimization Finished!"

    # Test model
    correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    # Calculate accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print "Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels})

    # Save model weights to disk
    save_path = saver.save(sess, model_path)
    print "Model saved in file: %s" % save_path

# Running a new session
print "Starting 2nd session..."
with tf.Session() as sess:
    # Initialize variables (the restore below overwrites them with the
    # saved values)
    sess.run(init)

    # Restore model weights from the previously saved checkpoint;
    # restore() assigns the values in place and returns None
    saver.restore(sess, model_path)
    print "Model restored from file: %s" % model_path

    # Resume training
    for epoch in range(7):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples / batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_x, batch_y = mnist.train.next_batch(batch_size)
            # Run optimization op (backprop) and cost op (to get loss value)
            _, c = sess.run([optimizer, cost],
                            feed_dict={x: batch_x, y: batch_y})
            # Compute average loss
            avg_cost += c / total_batch
        # Display logs per epoch step
        if epoch % display_step == 0:
            print "Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost)
    print "Second Optimization Finished!"

    # Test model
    correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    # Calculate accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print "Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels})
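One practical variation worth knowing: saver.save also accepts a global_step argument, which appends the step number to the checkpoint filename, and the Saver constructor's max_to_keep parameter (default 5) bounds how many checkpoint files are kept on disk. A self-contained sketch, with a made-up variable and path:

    import tensorflow as tf

    w = tf.Variable(0.0, name="w")
    inc = w.assign_add(1.0)
    saver = tf.train.Saver(max_to_keep=3)  # keep only the 3 newest checkpoints

    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        for step in range(1, 6):
            sess.run(inc)
            # Writes /tmp/step_demo.ckpt-1, -2, ...; with max_to_keep=3 only
            # the three most recent checkpoints survive on disk.
            saver.save(sess, "/tmp/step_demo.ckpt", global_step=step)

    with tf.Session() as sess:
        saver.restore(sess, "/tmp/step_demo.ckpt-5")
        print(sess.run(w))  # prints 5.0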
References:
https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/4_Utils/save_restore_model.py
https://www.tensorflow.org/versions/r0.9/api_docs/python/state_ops.html#Saver