Classification experiments with unidirectional and bidirectional RNNs on the MNIST dataset
Source: Internet · Editor: 程序博客网 · Time: 2024/05/22 06:59
Using an RNN for image classification may seem odd at first glance, but the idea is simple: each 28×28 MNIST image is read as a sequence of 28 rows, one row per time step. See the related papers for background. Below is an experiment comparing an RNN and a bidirectional RNN (BiRNN):
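Before the full experiment, the row-as-time-step trick can be sketched on its own. This is a minimal NumPy illustration (the dummy batch is made up, not real MNIST data) of the same `reshape((-1, n_steps, n_input))` call the training loop below uses:

```python
import numpy as np

# MNIST images arrive as flat 784-dim vectors; the RNN instead reads each
# image as a sequence of 28 time steps, one 28-pixel row per step.
batch = np.zeros((128, 784), dtype=np.float32)  # dummy mini-batch, stands in for mnist.train.next_batch

n_steps, n_input = 28, 28
sequences = batch.reshape((-1, n_steps, n_input))

print(sequences.shape)        # (128, 28, 28): batch x time steps x features
print(sequences[0, 5].shape)  # (28,): row 5 of the first image is one RNN input
```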
```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# created by fhqplzj on 2017/06/19 下午10:28
from __future__ import print_function

import tensorflow as tf
from tensorflow.contrib import rnn
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('/Users/fhqplzj/github/TensorFlow-Examples/examples/3_NeuralNetworks/data', one_hot=True)

# Hyperparameters
learning_rate = 0.001
training_iters = 100000
batch_size = 128
display_step = 10

# Each 28x28 image is treated as a sequence of 28 rows (time steps),
# each row being a 28-dimensional input vector.
n_input = 28
n_steps = 28
n_hidden = 128
n_classes = 10

x = tf.placeholder(tf.float32, [None, n_steps, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])

weights = {
    'out1': tf.Variable(tf.random_normal([n_hidden, n_classes])),      # unidirectional RNN
    'out2': tf.Variable(tf.random_normal([2 * n_hidden, n_classes]))   # BiRNN: fw and bw states concatenated
}
biases = {
    'out': tf.Variable(tf.random_normal([n_classes]))
}


def RNN(x, weights, biases):
    # Split [batch, n_steps, n_input] into a length-n_steps list of [batch, n_input] tensors
    x = tf.unstack(x, n_steps, 1)
    lstm_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
    outputs, states = rnn.static_rnn(lstm_cell, x, dtype=tf.float32)
    # Classify from the output of the last time step
    return tf.matmul(outputs[-1], weights['out1']) + biases['out']


def BiRNN(x, weights, biases):
    x = tf.unstack(x, n_steps, 1)
    lstm_fw_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
    lstm_bw_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
    # Each output concatenates the forward and backward states, hence 2 * n_hidden
    outputs, _, _ = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x, dtype=tf.float32)
    return tf.matmul(outputs[-1], weights['out2']) + biases['out']


for func in (RNN, BiRNN):
    print(func.__name__.center(100, '+'))  # func.func_name is Python 2 only; __name__ works everywhere
    pred = func(x, weights, biases)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
    correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
    init = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init)
        step = 1
        while step * batch_size < training_iters:
            batch_x, batch_y = mnist.train.next_batch(batch_size)
            batch_x = batch_x.reshape((-1, n_steps, n_input))
            sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
            if step % display_step == 0:
                acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
                loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})
                print('acc={:.6f},cost={:.6f}'.format(acc, loss))
            step += 1
        print('Optimization Finished!')
        total_len = 128
        test_x, test_y = mnist.test.next_batch(total_len)
        test_x = test_x.reshape((-1, n_steps, n_input))
        print(sess.run(accuracy, feed_dict={x: test_x, y: test_y}))
```
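One detail worth highlighting: the BiRNN's output weight matrix is `[2 * n_hidden, n_classes]` because `static_bidirectional_rnn` concatenates the forward and backward hidden states at every step. A small NumPy sketch of that concatenation (the `h_fw`/`h_bw` arrays are hypothetical stand-ins for the two LSTM outputs, not values from the model above):

```python
import numpy as np

n_hidden, batch = 128, 4
h_fw = np.random.randn(batch, n_hidden)  # hypothetical forward-LSTM output at the last step
h_bw = np.random.randn(batch, n_hidden)  # hypothetical backward-LSTM output at the same step

# static_bidirectional_rnn returns the two halves concatenated along the feature axis
h = np.concatenate([h_fw, h_bw], axis=1)
print(h.shape)  # (4, 256): matches weights['out2'] of shape [2 * n_hidden, n_classes]
```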