莫烦 Neural Network RNN Example


After studying TensorFlow for a while, my hands-on coding ability was still lacking, so I watched 莫烦's videos and am writing up my notes here.

This is an example of using an RNN to classify the MNIST dataset.

It is split into three parts: the first defines the parameters, the second defines the network, and the third trains it.

First, define the parameters:

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

HIDDEN_LAYER = 128    # units in the LSTM hidden layer
BATCH_SIZE = 128
INPUTS_NUM = 28       # inputs per time step (one 28-pixel image row)
STEPS_NUM = 28        # time steps (28 rows per image)
CLASS_NUM = 10        # digit classes 0-9
lr = 0.001
training_iters = 100000

x = tf.placeholder(tf.float32, shape=[None, STEPS_NUM, INPUTS_NUM])
y = tf.placeholder(tf.float32, shape=[None, CLASS_NUM])

weights = {
    'in': tf.Variable(tf.random_normal([INPUTS_NUM, HIDDEN_LAYER])),
    'out': tf.Variable(tf.random_normal([HIDDEN_LAYER, CLASS_NUM]))
}
biases = {
    'in': tf.Variable(tf.random_normal([HIDDEN_LAYER, ])),
    'out': tf.Variable(tf.random_normal([CLASS_NUM, ]))
}

Note that when lr was initially set to 0.1, convergence was extremely slow. Also pay close attention to the dimensions of x, y, weights, and biases.
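To make the dimension bookkeeping concrete, here is a minimal shape-trace sketch (my own addition, using NumPy arrays as stand-ins for the tensors above):

import numpy as np

BATCH_SIZE, STEPS_NUM, INPUTS_NUM, HIDDEN_LAYER = 128, 28, 28, 128

X = np.zeros([BATCH_SIZE, STEPS_NUM, INPUTS_NUM])         # (128, 28, 28): one 28x28 image per sample
X_flat = X.reshape([BATCH_SIZE * STEPS_NUM, INPUTS_NUM])  # (3584, 28): every row of every image
W_in = np.zeros([INPUTS_NUM, HIDDEN_LAYER])               # (28, 128)
X_in = X_flat.dot(W_in)                                   # (3584, 128): each row projected to 128 units
X_in = X_in.reshape([-1, STEPS_NUM, HIDDEN_LAYER])        # (128, 28, 128): back to (batch, steps, hidden)
print(X_in.shape)  # (128, 28, 128)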

The second step is to define the neural network model, which ultimately has to return a result (the logits):

def Rnn(X, weights, biases):
    # Input layer: the incoming X is [128, 28, 28]; first flatten to [128*28, 28],
    # multiply by weights['in'], then reshape back to [batch, STEPS_NUM, HIDDEN_LAYER].
    X = tf.reshape(X, [BATCH_SIZE * STEPS_NUM, INPUTS_NUM])
    X_in = tf.matmul(X, weights['in']) + biases['in']
    X_in = tf.reshape(X_in, [-1, STEPS_NUM, HIDDEN_LAYER])

    # LSTM cell and recurrence
    cell = tf.nn.rnn_cell.BasicLSTMCell(HIDDEN_LAYER, forget_bias=1.0, state_is_tuple=True)
    _init_state = cell.zero_state(BATCH_SIZE, tf.float32)
    output, states = tf.nn.dynamic_rnn(cell, X_in, initial_state=_init_state, time_major=False)

    # Output layer: states is an (c, h) LSTMStateTuple; states[1] is the final hidden state h
    result = tf.matmul(states[1], weights['out']) + biases['out']
    return result
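As a side note (my own addition, not part of the original tutorial): for an LSTM, the final hidden state states[1] is exactly the output at the last time step, so the last line inside Rnn could equivalently be written against output:

    # With time_major=False, output has shape [BATCH_SIZE, STEPS_NUM, HIDDEN_LAYER];
    # its last time step equals the final hidden state states[1].
    result = tf.matmul(output[:, -1, :], weights['out']) + biases['out']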

Finally the result is returned, and then we train:

pred = Rnn(x, weights, biases)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
train_op = tf.train.AdamOptimizer(lr).minimize(cost)

correct = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct, dtype=tf.float32))

init = tf.global_variables_initializer()  # initialize_all_variables() is deprecated

with tf.Session() as sess:
    sess.run(init)
    step = 0
    while step * BATCH_SIZE < training_iters:
        batch_xs, batch_ys = mnist.train.next_batch(BATCH_SIZE)
        # reshape with numpy, not tf.reshape -- feed_dict needs an ndarray, not a Tensor
        # batch_xs = tf.reshape(batch_xs, [BATCH_SIZE, STEPS_NUM, INPUTS_NUM])  # wrong
        batch_xs = batch_xs.reshape([BATCH_SIZE, STEPS_NUM, INPUTS_NUM])
        sess.run(train_op, feed_dict={x: batch_xs, y: batch_ys})
        if step % 20 == 0:
            print(sess.run(accuracy, feed_dict={x: batch_xs, y: batch_ys}))
        step = step + 1
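The loop above only reports accuracy on the current training batch. A quick sketch (my own addition) of checking one batch of held-out test images, placed inside the with tf.Session() block after training, might look like:

    # BATCH_SIZE is baked into the graph via cell.zero_state, so evaluate one full batch
    test_xs = mnist.test.images[:BATCH_SIZE].reshape([BATCH_SIZE, STEPS_NUM, INPUTS_NUM])
    test_ys = mnist.test.labels[:BATCH_SIZE]
    print('test accuracy:', sess.run(accuracy, feed_dict={x: test_xs, y: test_ys}))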



The problem I ran into: batch_xs comes back from next_batch as a flat array of shape [BATCH_SIZE, 784]. When reshaping it I first used tf.reshape (the commented-out line above), but tf.reshape returns a Tensor, and feed_dict only accepts concrete values such as NumPy arrays, so the array's own numpy reshape has to be used instead.
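A minimal sketch of the difference (my own addition):

import numpy as np
import tensorflow as tf

a = np.zeros(28 * 28, dtype=np.float32)

b = a.reshape([1, 28, 28])      # numpy: returns an ndarray -- valid as a feed_dict value
c = tf.reshape(a, [1, 28, 28])  # tensorflow: returns a Tensor -- feed_dict will reject it

print(type(b))  # <class 'numpy.ndarray'>
print(type(c))  # a tf.Tensor, not a concrete array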



The complete code is as follows:


import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

HIDDEN_LAYER = 128
BATCH_SIZE = 128
INPUTS_NUM = 28
STEPS_NUM = 28
CLASS_NUM = 10
lr = 0.001
training_iters = 100000

x = tf.placeholder(tf.float32, shape=[None, STEPS_NUM, INPUTS_NUM])
y = tf.placeholder(tf.float32, shape=[None, CLASS_NUM])

weights = {
    'in': tf.Variable(tf.random_normal([INPUTS_NUM, HIDDEN_LAYER])),
    'out': tf.Variable(tf.random_normal([HIDDEN_LAYER, CLASS_NUM]))
}
biases = {
    'in': tf.Variable(tf.random_normal([HIDDEN_LAYER, ])),
    'out': tf.Variable(tf.random_normal([CLASS_NUM, ]))
}


def Rnn(X, weights, biases):
    # input layer: [128, 28, 28] -> [128*28, 28] -> project -> [128, 28, 128]
    X = tf.reshape(X, [BATCH_SIZE * STEPS_NUM, INPUTS_NUM])
    X_in = tf.matmul(X, weights['in']) + biases['in']
    X_in = tf.reshape(X_in, [-1, STEPS_NUM, HIDDEN_LAYER])

    # LSTM cell and recurrence
    cell = tf.nn.rnn_cell.BasicLSTMCell(HIDDEN_LAYER, forget_bias=1.0, state_is_tuple=True)
    _init_state = cell.zero_state(BATCH_SIZE, tf.float32)
    output, states = tf.nn.dynamic_rnn(cell, X_in, initial_state=_init_state, time_major=False)

    # output layer: states[1] is the final hidden state h of the LSTM
    result = tf.matmul(states[1], weights['out']) + biases['out']
    return result


pred = Rnn(x, weights, biases)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
train_op = tf.train.AdamOptimizer(lr).minimize(cost)

correct = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct, dtype=tf.float32))

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    step = 0
    while step * BATCH_SIZE < training_iters:
        batch_xs, batch_ys = mnist.train.next_batch(BATCH_SIZE)
        batch_xs = batch_xs.reshape([BATCH_SIZE, STEPS_NUM, INPUTS_NUM])
        sess.run(train_op, feed_dict={x: batch_xs, y: batch_ys})
        if step % 20 == 0:
            print(sess.run(accuracy, feed_dict={x: batch_xs, y: batch_ys}))
        step = step + 1