The difference between static_rnn and dynamic_rnn
Source: Internet | Editor: 程序博客网 | Date: 2024/06/06
While reading through the TensorFlow API recently, I noticed that TensorFlow provides two RNN interfaces: a static RNN and a dynamic RNN. I looked up the difference between the two, and this Stack Overflow answer explains it well: https://stackoverflow.com/questions/39734146/whats-the-difference-between-tensorflow-dynamic-rnn-and-rnn
The original text:
tf.nn.rnn creates an unrolled graph for a fixed RNN length. That means, if you call tf.nn.rnn with inputs having 200 time steps you are creating a static graph with 200 RNN steps. First, graph creation is slow. Second, you’re unable to pass in longer sequences (> 200) than you’ve originally specified.
tf.nn.dynamic_rnn solves this. It uses a tf.While loop to dynamically construct the graph when it is executed. That means graph creation is faster and you can feed batches of variable size.
In short: tf.nn.rnn (static_rnn) unrolls the graph for a fixed number of time steps at construction time, so calling it with 200-step inputs builds a static graph with 200 RNN steps. First, graph creation is slow; second, you cannot feed sequences longer than the length you originally specified. tf.nn.dynamic_rnn solves this: it uses a tf.while_loop to run the cell dynamically at execution time, so graph construction is faster and batches with varying sequence lengths can be fed.
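The core distinction can be illustrated without TensorFlow at all. Below is a toy pure-Python sketch (the helper names `build_static_rnn` and `dynamic_rnn` are made up for illustration, not the TensorFlow API): the "static" version bakes a fixed number of step calls into the function it returns at build time, while the "dynamic" version loops at call time over however many steps the input actually has.

```python
def step(state, x):
    # toy RNN cell: the new state is just a running sum
    return state + x

def build_static_rnn(n_steps):
    # "graph construction": the step count is frozen into the returned function
    def run(inputs):
        assert len(inputs) == n_steps, "static graph only accepts n_steps inputs"
        state = 0
        for t in range(n_steps):          # length fixed at build time
            state = step(state, inputs[t])
        return state
    return run

def dynamic_rnn(inputs):
    # length decided at execution time, like the tf.while_loop inside tf.nn.dynamic_rnn
    state = 0
    for x in inputs:                      # loops over whatever length arrives
        state = step(state, x)
    return state

static_200 = build_static_rnn(200)        # can only ever process 200-step inputs
print(dynamic_rnn([1, 2, 3]))             # → 6; works for any length
```

The `static_200` function raises an error on anything but 200-step input, which is exactly the "unable to pass in longer sequences" limitation the quote describes.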
With the concept clear, let's look at how the code differs.
The static RNN:
def RNN(_X, weights, biases):
    # static_rnn expects a Python list of n_steps tensors, so reshape first
    _X = tf.transpose(_X, [1, 0, 2])     # (batch_size, n_steps, n_inputs) -> (n_steps, batch_size, n_inputs)
    _X = tf.reshape(_X, [-1, n_inputs])  # (n_steps*batch_size, n_inputs)
    _X = tf.matmul(_X, weights['in']) + biases['in']
    lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden_unis, forget_bias=1.0)
    _init_state = lstm_cell.zero_state(batch_size, dtype=tf.float32)
    _X = tf.split(_X, n_steps, 0)        # list of n_steps tensors of shape (batch_size, n_hidden_unis)
    outputs, states = tf.nn.static_rnn(lstm_cell, _X, initial_state=_init_state)
    # outputs is a Python list; take the last time step's output
    return tf.matmul(outputs[-1], weights['out']) + biases['out']
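To make the shape gymnastics in the static version concrete, here is a pure-Python trace of how the tensor shape evolves through the transpose, reshape, matmul, and split (shapes only, no TensorFlow; the concrete numbers 128/28/28/64 are illustrative assumptions, not values from the original post):

```python
batch_size, n_steps, n_inputs, n_hidden_units = 128, 28, 28, 64

shape = [batch_size, n_steps, n_inputs]      # input: (batch, steps, inputs)

# tf.transpose(_X, [1, 0, 2]) swaps the first two axes
shape = [shape[1], shape[0], shape[2]]       # (n_steps, batch_size, n_inputs)

# tf.reshape(_X, [-1, n_inputs]) merges the leading axes
shape = [shape[0] * shape[1], shape[2]]      # (n_steps*batch_size, n_inputs)

# tf.matmul with weights['in'] of shape (n_inputs, n_hidden_units)
shape = [shape[0], n_hidden_units]           # (n_steps*batch_size, n_hidden_units)

# tf.split(_X, n_steps, 0) yields n_steps equal chunks along axis 0
chunks = [[shape[0] // n_steps, shape[1]] for _ in range(n_steps)]

print(len(chunks))   # 28: one tensor per time step
print(chunks[0])     # [128, 64]: each is (batch_size, n_hidden_units)
```

The final list of per-step tensors is exactly the input format static_rnn requires, and its length is what pins the graph to a fixed number of steps.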
The dynamic RNN:
def RNN(_X, weights, biases):
    # dynamic_rnn takes _X with shape (batch_size, n_steps, n_inputs) directly;
    # no manual transpose/reshape/split is needed
    lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden_unis)
    outputs, states = tf.nn.dynamic_rnn(lstm_cell, _X, dtype=tf.float32)
    outputs = tf.transpose(outputs, [1, 0, 2])  # (batch_size, n_steps, n_hidden_unis) -> (n_steps, batch_size, n_hidden_unis)
    # take the last time step's output
    return tf.matmul(outputs[-1], weights['out']) + biases['out']
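Beyond faster graph construction, dynamic_rnn also supports per-example sequence lengths (its `sequence_length` argument), so padded batches are handled correctly. A toy pure-Python sketch of that idea, where each sequence in the batch stops updating its state after its own true length (the names here are illustrative, not the TensorFlow API):

```python
def step(state, x):
    # toy RNN cell: the new state is just a running sum
    return state + x

def dynamic_rnn_batch(batch, seq_lengths):
    # batch: list of zero-padded sequences; seq_lengths: true length of each
    final_states = []
    for seq, length in zip(batch, seq_lengths):
        state = 0
        for t in range(length):          # stop at this example's true length,
            state = step(state, seq[t])  # ignoring the padding beyond it
        final_states.append(state)
    return final_states

batch = [[1, 2, 3, 0], [5, 5, 0, 0]]     # zero-padded to a common length
print(dynamic_rnn_batch(batch, [3, 2]))  # → [6, 10]
```

With static_rnn you would instead have to pad every batch to the fixed unroll length and mask the outputs yourself.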