Implementing an RNN in TensorFlow
This post walks through one of the official TensorFlow RNN examples, used here in the many-to-one setting: image classification, where each 28×28 MNIST image is fed to the network as 28 time steps of 28 pixels each.
First, check the TensorFlow version; the code below may not run on older releases:
In [1]: import tensorflow as tf
In [2]: tf.__version__
Out[2]: '1.1.0-rc0'
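On pre-1.0 releases, the failures mostly come from API renames made in TensorFlow 1.0. A rough, non-exhaustive summary of the renames relevant to this listing (my addition, not part of the original post):

# Pre-1.0 form                                  ->  1.0+ form used below
# tf.split(axis, num_split, value)              ->  tf.split(value, num_or_size_splits, axis)
# tf.nn.rnn(cell, inputs, ...)                  ->  tf.contrib.rnn.static_rnn(cell, inputs, ...)
# tf.nn.rnn_cell.BasicLSTMCell(...)             ->  tf.contrib.rnn.BasicLSTMCell(...)
# tf.nn.softmax_cross_entropy_with_logits(p, y) ->  ...(logits=..., labels=...) (keyword args)
# tf.initialize_all_variables()                 ->  tf.global_variables_initializer()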
Code:
import tensorflow as tf
from tensorflow.contrib import rnn
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("../MNIST_data", one_hot=True)

# Hyperparameters
lr = 0.001                # learning rate
training_iters = 1000000  # total number of training examples to consume
batch_size = 128
n_inputs = 28             # input size per time step: one row of 28 pixels
n_step = 28               # number of time steps: 28 rows per image
n_hidden_units = 128      # number of units in the LSTM cell
n_classes = 10            # digits 0-9

# Graph inputs
x = tf.placeholder(tf.float32, [None, n_step, n_inputs])
y = tf.placeholder(tf.float32, [None, n_classes])

# Weights and biases for the input and output projections
weights = {
    'in': tf.Variable(tf.random_normal([n_inputs, n_hidden_units])),
    'out': tf.Variable(tf.random_normal([n_hidden_units, n_classes]))
}
biases = {
    'in': tf.Variable(tf.constant(0.1, shape=[n_hidden_units])),
    'out': tf.Variable(tf.constant(0.1, shape=[n_classes]))
}
def RNN(_X, weights, biases):
    # (batch_size, n_step, n_inputs) -> (n_step, batch_size, n_inputs)
    _X = tf.transpose(_X, [1, 0, 2])
    # Flatten to (n_step * batch_size, n_inputs) and project into the hidden dimension
    _X = tf.reshape(_X, [-1, n_inputs])
    _X = tf.matmul(_X, weights['in']) + biases['in']

    lstm_cell = rnn.BasicLSTMCell(n_hidden_units, forget_bias=1.0)
    _init_state = lstm_cell.zero_state(batch_size, dtype=tf.float32)

    # Split along axis 0 into a length-n_step list of (batch_size, n_hidden_units) tensors
    _X = tf.split(_X, n_step, 0)
    outputs, states = rnn.static_rnn(lstm_cell, _X, initial_state=_init_state)

    # Many-to-one: classify from the output of the last time step only
    return tf.matmul(outputs[-1], weights['out']) + biases['out']
pred = RNN(x, weights, biases)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
train_op = tf.train.AdagradOptimizer(lr).minimize(cost)

correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    step = 0
    while step * batch_size < training_iters:
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)
        # Reshape each flat 784-pixel image into 28 rows of 28 pixels
        batch_xs = batch_xs.reshape([batch_size, n_step, n_inputs])
        sess.run(train_op, feed_dict={x: batch_xs, y: batch_ys})
        if step % 20 == 0:
            print(sess.run(accuracy, feed_dict={x: batch_xs, y: batch_ys}))
        step += 1
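The transpose/reshape/split preprocessing above exists only because rnn.static_rnn wants a Python list of per-step tensors. As an aside (my addition, not from the original example), the same model can be written with tf.nn.dynamic_rnn, which takes the batch-major (batch, time, features) tensor directly and also avoids hard-coding batch_size in zero_state; a minimal sketch under the same placeholders and hyperparameters:

def RNN_dynamic(_X, weights, biases):
    # Project each 28-pixel row into the hidden dimension, keeping the time axis
    _X = tf.reshape(_X, [-1, n_inputs])
    _X = tf.matmul(_X, weights['in']) + biases['in']
    _X = tf.reshape(_X, [-1, n_step, n_hidden_units])

    lstm_cell = rnn.BasicLSTMCell(n_hidden_units, forget_bias=1.0)
    # outputs: (batch_size, n_step, n_hidden_units); a zero initial state
    # is created automatically when dtype is given
    outputs, states = tf.nn.dynamic_rnn(lstm_cell, _X, dtype=tf.float32)

    # Many-to-one: classify from the last time step, as in the static version
    return tf.matmul(outputs[:, -1, :], weights['out']) + biases['out']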
Results (accuracy on the current training batch, printed every 20 steps):
0.921875
0.953125
0.96875
0.953125
0.960938
0.976562
0.96875
0.960938
0.960938
0.945312
0.960938
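These are training-batch accuracies, so they say little about generalization. A minimal sketch (my addition, not in the original post) of how you could also check accuracy on held-out MNIST test data at the end of the session, assuming the graph defined above:

# Inside the `with tf.Session() as sess:` block, after the training loop.
# Use exactly batch_size test examples, since zero_state was built with a
# fixed batch_size.
test_xs = mnist.test.images[:batch_size].reshape([batch_size, n_step, n_inputs])
test_ys = mnist.test.labels[:batch_size]
print("test accuracy:", sess.run(accuracy, feed_dict={x: test_xs, y: test_ys}))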