A Simple Understanding of RNNs


Part 1 

In this post, we will build an RNN that takes a binary input sequence X and predicts a binary output sequence Y. The sequences are constructed as follows:


Input sequence X: at time step t, Xt has a 50% chance of being 0 and a 50% chance of being 1. For example, X might be [1, 0, 0, 1, 1, ...].

Output sequence Y: at time step t, Yt has a base 50% chance of being 1 and a 50% chance of being 0. This base probability is then adjusted:

  •   If X(t-3) is 1, the probability that Yt is 1 increases by 50%;
  •   If X(t-8) is 1, the probability that Yt is 1 decreases by 25%;
  •   If both X(t-3) and X(t-8) are 1, the probability that Yt is 1 is 50% + 50% - 25% = 75%.

The data therefore contains two dependencies: one at t-3 and one at t-8.

We can check whether the RNN has learned these dependencies by looking at the cross-entropy loss.

  1.   If the network learns neither dependency, the best constant prediction is a 62.5% chance of 1, and the expected cross-entropy is about 0.66 (see the short calculation below).
  2.   If it learns only the t-3 dependency, the expected cross-entropy is about 0.52.
  3.   If it learns both dependencies, the expected cross-entropy is about 0.45.
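The 62.5% figure comes from averaging over the four equally likely combinations of X(t-3) and X(t-8): in those four cases the probability that Yt is 1 is 1.0, 0.75, 0.5, and 0.25 respectively, and (1.0 + 0.75 + 0.5 + 0.25) / 4 = 0.625.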
The calculation in code:
import numpy as np

print("Expected cross entropy loss if the model:")
print("- learns neither dependency:", -(0.625 * np.log(0.625) +
                                         0.375 * np.log(0.375)))
# Learns first dependency only ==> 0.51916669970720941
print("- learns first dependency:  ",
      -0.5 * (0.875 * np.log(0.875) + 0.125 * np.log(0.125))
      - 0.5 * (0.625 * np.log(0.625) + 0.375 * np.log(0.375)))
print("- learns both dependencies: ",
      -0.50 * (0.75 * np.log(0.75) + 0.25 * np.log(0.25))
      - 0.25 * (2 * 0.50 * np.log(0.50)) - 0.25 * (0))

Expected cross entropy loss if the model:
- learns neither dependency: 0.661563238158
- learns first dependency:   0.519166699707
- learns both dependencies:  0.454454367449

The model will be as simple as possible: at time step t, it takes the current binary input Xt and the previous state vector St-1 as inputs, and produces a new state vector St and a predicted probability distribution Pt over the binary output Yt:

St = tanh(W(Xt @ St-1) + bs)

Pt = softmax(U·St + bp)

Here @ denotes vector concatenation.
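To make the two formulas concrete, here is a minimal NumPy sketch of a single time step; the sizes and variable names are chosen for illustration and are not part of the TensorFlow model built later:

import numpy as np

num_classes, state_size = 2, 4                                # illustrative sizes
W = np.random.randn(num_classes + state_size, state_size)     # acts on [Xt, St-1]
bs = np.zeros(state_size)
U = np.random.randn(state_size, num_classes)
bp = np.zeros(num_classes)

def step(x_t, s_prev):
    # St = tanh(W(Xt @ St-1) + bs), where "@" in the text means concatenation
    s_t = np.tanh(np.concatenate([x_t, s_prev]).dot(W) + bs)
    # Pt = softmax(U·St + bp)
    logits = s_t.dot(U) + bp
    p_t = np.exp(logits - logits.max())
    return s_t, p_t / p_t.sum()

s_t, p_t = step(np.array([0.0, 1.0]), np.zeros(state_size))   # one-hot Xt = 1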








To build a model in TensorFlow, we first represent the model as a graph and then execute that graph. The key question is: how wide should the graph be? That is, how many time steps of input does the graph accept at once?

Every time step is identical, so let G represent one time step: G(Xt, St-1) -> (Pt, St). We could execute the graph once per time step. This works fine for an already-trained model, but it is a problem during training: the gradients computed in backpropagation are bounded by the graph, so we could only backpropagate errors within the current time step and never to an earlier one. The network would therefore be unable to learn long-range dependencies.

Another option is to make the graph as wide as the data sequence. This often works, unless the input sequence is arbitrarily long. If the sequence has 10,000 steps, the error from step 9,999 would be propagated all the way back to step 0. This is computationally expensive and, worse, backpropagating errors over that many time steps typically causes vanishing or exploding gradients (which is easy to see from the chain rule).

A common way to handle long sequences is to "truncate" backpropagation: errors are backpropagated over at most a fixed number of steps. We choose that number, n, as a model hyperparameter, keeping the trade-off in mind: a larger n lets the model capture longer-range dependencies, but is more expensive to compute.

An intuitive interpretation of backpropagating errors for n steps is that every error gets propagated back a full n steps. With a sequence of length 49 and n = 7, we would propagate 42 of the errors the full 7 steps. That, however, is not the approach TensorFlow uses. TensorFlow instead limits the graph to n units of width: truncated backpropagation is then easy to implement by feeding in length-n subsequences one at a time and doing a single backward pass per iteration. In other words, the length-49 sequence is split into 7 subsequences of length 7, each processed in a separate graph execution, and within each execution only the error at the 7th input is backpropagated the full 7 steps. Therefore, even if you believe there are no dependencies in the data longer than 7 steps, it can still be worth trying n > 7 in order to increase the proportion of errors that are backpropagated the full 7 steps.
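As a toy sketch of that splitting (purely illustrative; the real batching code appears below):

import numpy as np

seq = np.arange(49)                  # a toy length-49 sequence
n = 7                                # truncation length
subsequences = seq.reshape(-1, n)    # 7 consecutive subsequences of length 7
# Each subsequence is one execution of the graph. The final state from
# subsequence i is fed in as the initial state of subsequence i + 1, so
# information still flows forward, but gradients only flow within each window.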



Our graph will be n units (time steps) wide, with every unit an exact duplicate sharing the same parameters. The easiest way to build the graph is to build each duplicate in parallel, and this is the key point: the easiest way to represent each type of duplicated tensor (the RNN inputs, the RNN outputs, the predictions, and the losses) is as a list.


Each execution of the graph performs one training step and returns the final state, which is then fed into the next execution.

import numpy as np
import tensorflow as tf
%matplotlib inline
import matplotlib.pyplot as plt

# Global config variables
num_steps = 5  # number of truncated backprop steps ('n' in the discussion above)
batch_size = 200
num_classes = 2
state_size = 4
learning_rate = 0.1

def gen_data(size=1000000):
    X = np.array(np.random.choice(2, size=(size,)))
    Y = []
    for i in range(size):
        threshold = 0.5
        if X[i-3] == 1:
            threshold += 0.5
        if X[i-8] == 1:
            threshold -= 0.25
        if np.random.rand() > threshold:
            Y.append(0)
        else:
            Y.append(1)
    return X, np.array(Y)

# adapted from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/rnn/ptb/reader.py
def gen_batch(raw_data, batch_size, num_steps):
    raw_x, raw_y = raw_data
    data_length = len(raw_x)

    # partition raw data into batches and stack them vertically in a data matrix
    batch_partition_length = data_length // batch_size
    data_x = np.zeros([batch_size, batch_partition_length], dtype=np.int32)
    data_y = np.zeros([batch_size, batch_partition_length], dtype=np.int32)
    for i in range(batch_size):
        data_x[i] = raw_x[batch_partition_length * i:batch_partition_length * (i + 1)]
        data_y[i] = raw_y[batch_partition_length * i:batch_partition_length * (i + 1)]

    # further divide batch partitions into num_steps for truncated backprop
    epoch_size = batch_partition_length // num_steps
    for i in range(epoch_size):
        x = data_x[:, i * num_steps:(i + 1) * num_steps]
        y = data_y[:, i * num_steps:(i + 1) * num_steps]
        yield (x, y)

def gen_epochs(n, num_steps):
    for i in range(n):
        yield gen_batch(gen_data(), batch_size, num_steps)
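As a quick sanity check (a hypothetical snippet, not part of the original code), the shapes produced by gen_batch with the config above would be:

X, Y = gen_data(10000)
x_batch, y_batch = next(gen_batch((X, Y), batch_size, num_steps))
print(x_batch.shape, y_batch.shape)   # (200, 5) (200, 5)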

Model

"""Placeholders"""x = tf.placeholder(tf.int32, [batch_size, num_steps], name='input_placeholder')y = tf.placeholder(tf.int32, [batch_size, num_steps], name='labels_placeholder')init_state = tf.zeros([batch_size, state_size])"""RNN Inputs"""# Turn our x placeholder into a list of one-hot tensors:# rnn_inputs is a list of num_steps tensors with shape [batch_size, num_classes]x_one_hot = tf.one_hot(x, num_classes)rnn_inputs = tf.unstack(x_one_hot, axis=1)
"""Definition of rnn_cellThis is very similar to the __call__ method on Tensorflow's BasicRNNCell. See:https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/rnn/python/ops/core_rnn_cell_impl.py#L95"""with tf.variable_scope('rnn_cell'):    W = tf.get_variable('W', [num_classes + state_size, state_size])    b = tf.get_variable('b', [state_size], initializer=tf.constant_initializer(0.0))def rnn_cell(rnn_input, state):    with tf.variable_scope('rnn_cell', reuse=True):        W = tf.get_variable('W', [num_classes + state_size, state_size])        b = tf.get_variable('b', [state_size], initializer=tf.constant_initializer(0.0))    return tf.tanh(tf.matmul(tf.concat([rnn_input, state], 1), W) + b)

"""Adding rnn_cells to graphThis is a simplified version of the "static_rnn" function from Tensorflow's api. See:https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/rnn/python/ops/core_rnn.py#L41Note: In practice, using "dynamic_rnn" is a better choice that the "static_rnn":https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/rnn.py#L390"""state = init_staternn_outputs = []for rnn_input in rnn_inputs:    state = rnn_cell(rnn_input, state)    rnn_outputs.append(state)final_state = rnn_outputs[-1]
"""Predictions, loss, training stepLosses is similar to the "sequence_loss"function from Tensorflow's API, except that here we are using a list of 2D tensors, instead of a 3D tensor. See:https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/seq2seq/python/ops/loss.py#L30"""#logits and predictionswith tf.variable_scope('softmax'):    W = tf.get_variable('W', [state_size, num_classes])    b = tf.get_variable('b', [num_classes], initializer=tf.constant_initializer(0.0))logits = [tf.matmul(rnn_output, W) + b for rnn_output in rnn_outputs]predictions = [tf.nn.softmax(logit) for logit in logits]# Turn our y placeholder into a list of labelsy_as_list = tf.unstack(y, num=num_steps, axis=1)#losses and train_steplosses = [tf.nn.sparse_softmax_cross_entropy_with_logits(labels=label, logits=logit) for \          logit, label in zip(logits, y_as_list)]total_loss = tf.reduce_mean(losses)train_step = tf.train.AdagradOptimizer(learning_rate).minimize(total_loss)

"""Train the network"""def train_network(num_epochs, num_steps, state_size=4, verbose=True):    with tf.Session() as sess:        sess.run(tf.global_variables_initializer())        training_losses = []        for idx, epoch in enumerate(gen_epochs(num_epochs, num_steps)):            training_loss = 0            training_state = np.zeros((batch_size, state_size))            if verbose:                print("\nEPOCH", idx)            for step, (X, Y) in enumerate(epoch):                tr_losses, training_loss_, training_state, _ = \                    sess.run([losses,                              total_loss,                              final_state,                              train_step],                                  feed_dict={x:X, y:Y, init_state:training_state})                training_loss += training_loss_                if step % 100 == 0 and step > 0:                    if verbose:                        print("Average loss at step", step,                              "for last 250 steps:", training_loss/100)                    training_losses.append(training_loss/100)                    training_loss = 0    return training_losses

training_losses = train_network(1, num_steps)
plt.plot(training_losses)

EPOCH 0
Average loss at step 100 for last 250 steps: 0.6559883219
Average loss at step 200 for last 250 steps: 0.617185292244
Average loss at step 300 for last 250 steps: 0.595771013498
Average loss at step 400 for last 250 steps: 0.568864737153
Average loss at step 500 for last 250 steps: 0.524139249921
Average loss at step 600 for last 250 steps: 0.522666031122
Average loss at step 700 for last 250 steps: 0.522012578249
Average loss at step 800 for last 250 steps: 0.519179680347
Average loss at step 900 for last 250 steps: 0.519965928495

The network learns the first dependency very quickly, reaching a cross-entropy loss of about 0.52. We could get it to learn the second dependency as well by increasing num_steps and state_size, as sketched below.
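A hedged sketch of that experiment (the specific values are illustrative, and the graph-building code above would have to be re-run with the new settings):

# Illustrative settings only: a truncation window longer than 8 steps gives the
# network a chance to see the t-8 dependency, and a larger state gives it room
# to remember it.
num_steps = 10
state_size = 16
# ... rebuild the placeholders / RNN / loss exactly as above with these values, then:
# training_losses = train_network(5, num_steps, state_size=state_size)
# plt.plot(training_losses)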


Finally, let's translate the model to use TensorFlow's RNN API.

"""Definition of rnn_cellThis is very similar to the __call__ method on Tensorflow's BasicRNNCell. See:https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/rnn/python/ops/core_rnn_cell_impl.py#L95"""with tf.variable_scope('rnn_cell'):    W = tf.get_variable('W', [num_classes + state_size, state_size])    b = tf.get_variable('b', [state_size], initializer=tf.constant_initializer(0.0))def rnn_cell(rnn_input, state):    with tf.variable_scope('rnn_cell', reuse=True):        W = tf.get_variable('W', [num_classes + state_size, state_size])        b = tf.get_variable('b', [state_size], initializer=tf.constant_initializer(0.0))    return tf.tanh(tf.matmul(tf.concat([rnn_input, state], 1), W) + b)"""Adding rnn_cells to graphThis is a simplified version of the "static_rnn" function from Tensorflow's api. See:https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/rnn/python/ops/core_rnn.py#L41Note: In practice, using "dynamic_rnn" is a better choice that the "static_rnn":https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/rnn.py#L390"""state = init_staternn_outputs = []for rnn_input in rnn_inputs:    state = rnn_cell(rnn_input, state)    rnn_outputs.append(state)final_state = rnn_outputs[-1]
The code above can be replaced by just two lines:

cell = tf.contrib.rnn.BasicRNNCell(state_size)
rnn_outputs, final_state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=init_state)

We can also use a dynamic RNN: "dynamic" means the graph is unrolled dynamically at execution time, which is more efficient.

The final models

Static RNN

"""Placeholders"""x = tf.placeholder(tf.int32, [batch_size, num_steps], name='input_placeholder')y = tf.placeholder(tf.int32, [batch_size, num_steps], name='labels_placeholder')init_state = tf.zeros([batch_size, state_size])"""Inputs"""x_one_hot = tf.one_hot(x, num_classes)rnn_inputs = tf.unstack(x_one_hot, axis=1)"""RNN"""cell = tf.contrib.rnn.BasicRNNCell(state_size)rnn_outputs, final_state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=init_state)"""Predictions, loss, training step"""with tf.variable_scope('softmax'):    W = tf.get_variable('W', [state_size, num_classes])    b = tf.get_variable('b', [num_classes], initializer=tf.constant_initializer(0.0))logits = [tf.matmul(rnn_output, W) + b for rnn_output in rnn_outputs]predictions = [tf.nn.softmax(logit) for logit in logits]y_as_list = tf.unstack(y, num=num_steps, axis=1)losses = [tf.nn.sparse_softmax_cross_entropy_with_logits(labels=label, logits=logit) for \          logit, label in zip(logits, y_as_list)]total_loss = tf.reduce_mean(losses)train_step = tf.train.AdagradOptimizer(learning_rate).minimize(total_loss)


Dynamic RNN

"""Placeholders"""x = tf.placeholder(tf.int32, [batch_size, num_steps], name='input_placeholder')y = tf.placeholder(tf.int32, [batch_size, num_steps], name='labels_placeholder')init_state = tf.zeros([batch_size, state_size])"""Inputs"""rnn_inputs = tf.one_hot(x, num_classes)"""RNN"""cell = tf.contrib.rnn.BasicRNNCell(state_size)rnn_outputs, final_state = tf.nn.dynamic_rnn(cell, rnn_inputs, initial_state=init_state)"""Predictions, loss, training step"""with tf.variable_scope('softmax'):    W = tf.get_variable('W', [state_size, num_classes])    b = tf.get_variable('b', [num_classes], initializer=tf.constant_initializer(0.0))logits = tf.reshape(            tf.matmul(tf.reshape(rnn_outputs, [-1, state_size]), W) + b,            [batch_size, num_steps, num_classes])predictions = tf.nn.softmax(logits)losses = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)total_loss = tf.reduce_mean(losses)train_step = tf.train.AdagradOptimizer(learning_rate).minimize(total_loss)




