TensorFlow Learning: RNN-LSTM Applied to MNIST Classification
This post covers:
1. RNN: an LSTM recurrent network applied to MNIST digit classification
2. Adapted from Morvan Zhou (周莫烦)'s YouTube video tutorial; the code below is from the original author's GitHub

The key idea is to treat each 28×28 MNIST image as a sequence: the 28 rows are fed to the LSTM one per time step, with each row contributing 28 input features, and the output at the last step is used for classification (see the reshape sketch below).
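To make the sequence framing concrete, here is a minimal NumPy illustration (not part of the original tutorial) of how one flattened MNIST image becomes 28 time steps of 28 features, mirroring the `batch_xs.reshape` call in the training loop further down:

```python
import numpy as np

# A flattened MNIST image has 784 values; reshaping it to (28, 28)
# turns it into 28 time steps with 28 input features per step.
image = np.random.rand(28 * 28)      # stand-in for one MNIST image, shape (784,)
sequence = image.reshape(28, 28)     # (n_steps, n_inputs) = (28, 28)
print(sequence.shape)                # (28, 28)
```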
```python
# View more python learning tutorial on my Youtube and Youku channel!!!
# Youtube video tutorial: https://www.youtube.com/channel/UCdyjiB5H8Pu7aDTNVXTTpcg
# Youku video tutorial: http://i.youku.com/pythontutorial

"""
This code is a modified version of the code from this link:
https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/recurrent_network.py
His code is a very good one for RNN beginners. Feel free to check it out.
"""
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# set random seed for comparing the two result calculations
tf.set_random_seed(1)

# this is data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

# hyperparameters
lr = 0.001
training_iters = 100000
batch_size = 128

n_inputs = 28         # MNIST data input (img shape: 28*28)
n_steps = 28          # time steps
n_hidden_units = 128  # neurons in hidden layer
n_classes = 10        # MNIST classes (0-9 digits)

# tf Graph input
x = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_classes])

# Define weights
weights = {
    # (28, 128)
    'in': tf.Variable(tf.random_normal([n_inputs, n_hidden_units])),
    # (128, 10)
    'out': tf.Variable(tf.random_normal([n_hidden_units, n_classes]))
}
biases = {
    # (128, )
    'in': tf.Variable(tf.constant(0.1, shape=[n_hidden_units, ])),
    # (10, )
    'out': tf.Variable(tf.constant(0.1, shape=[n_classes, ]))
}


def RNN(X, weights, biases):
    # hidden layer for input to cell
    ########################################

    # transpose the inputs shape from
    # X ==> (128 batch * 28 steps, 28 inputs)
    X = tf.reshape(X, [-1, n_inputs])

    # into hidden
    # X_in = (128 batch * 28 steps, 128 hidden)
    X_in = tf.matmul(X, weights['in']) + biases['in']
    # X_in ==> (128 batch, 28 steps, 128 hidden)
    X_in = tf.reshape(X_in, [-1, n_steps, n_hidden_units])

    # cell
    ##########################################

    # basic LSTM Cell.
    if int((tf.__version__).split('.')[1]) < 12 and int((tf.__version__).split('.')[0]) < 1:
        cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden_units, forget_bias=1.0, state_is_tuple=True)
    else:
        cell = tf.contrib.rnn.BasicLSTMCell(n_hidden_units)
    # lstm cell is divided into two parts (c_state, h_state)
    init_state = cell.zero_state(batch_size, dtype=tf.float32)

    # You have 2 options for the following step:
    # 1: tf.nn.rnn(cell, inputs);
    # 2: tf.nn.dynamic_rnn(cell, inputs).
    # If you use option 1, you have to modify the shape of X_in; go and check out this:
    # https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/recurrent_network.py
    # Here we go for option 2.
    # dynamic_rnn receives a Tensor of shape (batch, steps, inputs) or (steps, batch, inputs) as X_in.
    # Make sure time_major is set accordingly.
    outputs, final_state = tf.nn.dynamic_rnn(cell, X_in, initial_state=init_state, time_major=False)

    # hidden layer for output as the final results
    #############################################
    # results = tf.matmul(final_state[1], weights['out']) + biases['out']

    # or
    # unpack to list [(batch, outputs)..] * steps
    if int((tf.__version__).split('.')[1]) < 12 and int((tf.__version__).split('.')[0]) < 1:
        outputs = tf.unpack(tf.transpose(outputs, [1, 0, 2]))    # states is the last outputs
    else:
        outputs = tf.unstack(tf.transpose(outputs, [1, 0, 2]))
    results = tf.matmul(outputs[-1], weights['out']) + biases['out']    # shape = (128, 10)

    return results


pred = RNN(x, weights, biases)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
train_op = tf.train.AdamOptimizer(lr).minimize(cost)

correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

with tf.Session() as sess:
    # tf.initialize_all_variables() is no longer valid
    # as of 2017-03-02 if using tensorflow >= 0.12
    if int((tf.__version__).split('.')[1]) < 12 and int((tf.__version__).split('.')[0]) < 1:
        init = tf.initialize_all_variables()
    else:
        init = tf.global_variables_initializer()
    sess.run(init)
    step = 0
    while step * batch_size < training_iters:
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)
        batch_xs = batch_xs.reshape([batch_size, n_steps, n_inputs])
        sess.run([train_op], feed_dict={
            x: batch_xs,
            y: batch_ys,
        })
        if step % 20 == 0:
            print(sess.run(accuracy, feed_dict={
                x: batch_xs,
                y: batch_ys,
            }))
        step += 1
```
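The commented-out line in `RNN()` points at an equivalent readout: instead of unstacking `outputs` and taking the last time step, you can read the LSTM's final hidden state directly. A minimal self-contained sketch of that variant (TF 1.x API; the hypothetical placeholder `x_in` stands in for the projected inputs `X_in` above):

```python
import tensorflow as tf

n_steps, n_hidden_units, n_classes, batch_size = 28, 128, 10, 128

# Hypothetical input: the already-projected sequence, shape (batch, steps, hidden)
x_in = tf.placeholder(tf.float32, [batch_size, n_steps, n_hidden_units])
w_out = tf.Variable(tf.random_normal([n_hidden_units, n_classes]))
b_out = tf.Variable(tf.constant(0.1, shape=[n_classes]))

cell = tf.contrib.rnn.BasicLSTMCell(n_hidden_units)
init_state = cell.zero_state(batch_size, dtype=tf.float32)
outputs, final_state = tf.nn.dynamic_rnn(cell, x_in, initial_state=init_state, time_major=False)

# final_state is an LSTMStateTuple (c, h); final_state[1] is the h state,
# which equals the cell's output at the last time step, so this readout
# matches the outputs[-1] version used in the main code.
results = tf.matmul(final_state[1], w_out) + b_out   # shape = (128, 10)
```

Reading `final_state[1]` avoids transposing and unstacking the whole output tensor, which is slightly cheaper when only the last step matters.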