RNN/LSTM Recurrent Neural Networks (03): Advanced TensorFlow Implementation


Full code: see here
A simple TensorFlow example on binary sequences can be found here
For the basics of RNNs and LSTMs, see here
This post covers the following:
Training an RNN model to generate text character by character (final section)
Using TensorFlow's scan function to get the effect of dynamic_rnn (building the graph dynamically)
Using MultiRNNCell to build a multi-layer RNN
Implementing Dropout and Layer Normalization

I. Model Description and Data Processing

1. Model Description

We will use an RNN to learn a language model that generates character sequences.
There are existing implementations on GitHub:
Torch implementation: https://github.com/karpathy/char-rnn
TensorFlow implementation: https://github.com/sherjilozair/char-rnn-tensorflow
Below we look at how to implement it ourselves.

2. Data Processing

The dataset is a small Shakespeare corpus (see here); any other text works as well.
Uppercase and lowercase characters are treated as distinct characters.
Download and read the data:

# imports used throughout this post
import os
import time
import urllib.request
import numpy as np
import tensorflow as tf

'''Download and read the data'''
file_url = 'https://raw.githubusercontent.com/jcjohnson/torch-rnn/master/data/tiny-shakespeare.txt'
file_name = 'tinyshakespeare.txt'
if not os.path.exists(file_name):
    urllib.request.urlretrieve(file_url, filename=file_name)
with open(file_name, 'r') as f:
    raw_data = f.read()
    print("Data length:", len(raw_data))

Process the character data and convert it to integers:

Use set to deduplicate and obtain all unique characters
Map each character to an integer (using a dictionary)
Iterate over the raw data to get the integer id of every character

'''Process the character data: convert characters to integers'''
vocab = set(raw_data)                    # deduplicate with set, i.e. keep each letter once (case-sensitive)
vocab_size = len(vocab)
idx_to_vocab = dict(enumerate(vocab))    # turn the set into a dict: index 0, 1, 2, ..., vocab_size-1 -> character
vocab_to_idx = dict(zip(idx_to_vocab.values(), idx_to_vocab.keys()))  # invert the (key, value) pairs: character -> index
data = [vocab_to_idx[c] for c in raw_data]   # convert raw_data to the corresponding integer ids

del raw_data
Generate batch data

The PTB model from TensorFlow models: https://github.com/tensorflow/models/tree/master/tutorials/rnn/ptb

import reader   # reader.py from the accompanying code; it provides ptb_iterator_oldversion (shown below)

'''Hyperparameters'''
num_steps = 200           # number of unrolled time steps to learn over
batch_size = 32
state_size = 100          # size of the cell state
num_classes = vocab_size
learning_rate = 1e-4

def gen_epochs(num_epochs, num_steps, batch_size):
    for i in range(num_epochs):
        yield reader.ptb_iterator_oldversion(data, batch_size, num_steps)

The ptb_iterator function:
It yields X and Y, each with shape [batch_size, num_steps]

def ptb_iterator_oldversion(raw_data, batch_size, num_steps):
  """Iterate on the raw PTB data.

  This generates batch_size pointers into the raw PTB data, and allows
  minibatch iteration along these pointers.

  Args:
    raw_data: one of the raw data outputs from ptb_raw_data.
    batch_size: int, the batch size.
    num_steps: int, the number of unrolls.

  Yields:
    Pairs of the batched data, each a matrix of shape [batch_size, num_steps].
    The second element of the tuple is the same data time-shifted to the
    right by one.

  Raises:
    ValueError: if batch_size or num_steps are too high.
  """
  raw_data = np.array(raw_data, dtype=np.int32)
  data_len = len(raw_data)
  batch_len = data_len // batch_size
  data = np.zeros([batch_size, batch_len], dtype=np.int32)
  for i in range(batch_size):
    data[i] = raw_data[batch_len * i:batch_len * (i + 1)]
  epoch_size = (batch_len - 1) // num_steps
  if epoch_size == 0:
    raise ValueError("epoch_size == 0, decrease batch_size or num_steps")
  for i in range(epoch_size):
    x = data[:, i*num_steps:(i+1)*num_steps]
    y = data[:, i*num_steps+1:(i+1)*num_steps+1]
    yield (x, y)

II. Using tf.scan and dynamic_rnn

1. Why use tf.scan and dynamic_rnn

In our earlier first example, the version that did not use dynamic_rnn split the 3-D input [batch_size, num_steps, state_size] along the num_steps dimension and stored the output of every step in a Python list, as in the figure below.
(Figure: the output of each step is appended to a list)

Building the graph this way is very slow. It does not show in our small example, but when the number of steps is large (num_steps, i.e., when the dependencies to learn are long), and especially with deep RNNs, this approach is no longer practical.
To make it easy to compare run times against dynamic_rnn, the list-based version is still given below.

2. The list-based approach (static_rnn)

Build the computation graph

My TensorFlow version here is 1.2.0, which differs slightly from 1.0.
This is much like the earlier example, so I will not repeat the details.

'''The list-based (static_rnn) approach'''
def build_basic_rnn_graph_with_list(
    state_size = state_size,
    num_classes = num_classes,
    batch_size = batch_size,
    num_steps = num_steps,
    num_layers = 3,
    learning_rate = learning_rate):

    reset_graph()

    x = tf.placeholder(tf.int32, [batch_size, num_steps], name='x')
    y = tf.placeholder(tf.int32, [batch_size, num_steps], name='y')

    x_one_hot = tf.one_hot(x, num_classes)   # (batch_size, num_steps, num_classes)
    '''Split along the second dimension: num_steps tensors of shape (batch_size, num_classes)'''
    rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)]

    cell = tf.nn.rnn_cell.BasicRNNCell(state_size)
    init_state = cell.zero_state(batch_size, tf.float32)
    '''Use static_rnn'''
    rnn_outputs, final_state = tf.contrib.rnn.static_rnn(cell=cell, inputs=rnn_inputs,
                                                         initial_state=init_state)
    #rnn_outputs, final_state = tf.nn.rnn(cell, rnn_inputs, initial_state=init_state)  # TensorFlow 1.0 style

    with tf.variable_scope('softmax'):
        W = tf.get_variable('W', [state_size, num_classes])
        b = tf.get_variable('b', [num_classes], initializer=tf.constant_initializer(0.0))
    logits = [tf.matmul(rnn_output, W) + b for rnn_output in rnn_outputs]
    y_as_list = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(y, num_steps, 1)]

    #loss_weights = [tf.ones([batch_size]) for i in range(num_steps)]
    losses = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y_as_list,
                                                            logits=logits)
    #losses = tf.nn.seq2seq.sequence_loss_by_example(logits, y_as_list, loss_weights)  # TensorFlow 1.0 style
    total_loss = tf.reduce_mean(losses)
    train_step = tf.train.AdamOptimizer(learning_rate).minimize(total_loss)

    return dict(
        x = x,
        y = y,
        init_state = init_state,
        final_state = final_state,
        total_loss = total_loss,
        train_step = train_step
    )

The training function

Similar to the earlier example.

'''Function that trains the RNN'''
def train_rnn(g, num_epochs, num_steps=num_steps, batch_size=batch_size, verbose=True, save=False):
    tf.set_random_seed(2345)
    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        training_losses = []
        for idx, epoch in enumerate(gen_epochs(num_epochs, num_steps, batch_size)):
            training_loss = 0
            steps = 0
            training_state = None
            for X, Y in epoch:
                steps += 1
                feed_dict = {g['x']: X, g['y']: Y}
                if training_state is not None:
                    feed_dict[g['init_state']] = training_state
                training_loss_, training_state, _ = sess.run([g['total_loss'],
                                                              g['final_state'],
                                                              g['train_step']],
                                                             feed_dict=feed_dict)
                training_loss += training_loss_
            if verbose:
                print('Average loss for epoch {0}: {1}'.format(idx, training_loss/steps))
            training_losses.append(training_loss/steps)
        if isinstance(save, str):
            g['saver'].save(sess, save)
    return training_losses

Run it:

start_time = time.time()
g = build_basic_rnn_graph_with_list()
print("Graph build time:", time.time() - start_time)
start_time = time.time()
train_rnn(g, 3)
print("Training time:", time.time() - start_time)

Results

Time to build the graph: 113.43532419204712
Run time for 3 epochs:
Average loss for epoch 0: 3.6314958388777985
Average loss for epoch 1: 3.287133811534136
Average loss for epoch 2: 3.250853428895446
Training time: 84.2816972732544
Building the graph is clearly very slow, and this is with only a single-layer cell.

3. Using dynamic_rnn

We actually already used dynamic_rnn in the first example; here we additionally use MultiRNNCell to stack several layers of cells (more on that below).
Build the model:
tf.nn.embedding_lookup(params, ids) looks up the representation of ids in params, much like indexing a matrix with an array. Here a 2-D ids tensor is looked up in the 2-D embeddings matrix: every integer in each row of ids selects one row of embeddings, so the result has shape [batch_size, num_steps, state_size]. For a concrete look at the output, see here.
In other words, each character gets a learned vector representation; the static_rnn version above used a one-hot representation instead.
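To make those shapes concrete, here is a tiny sketch with toy sizes (these are not the values used in this post):

ids = tf.constant([[0, 2, 1],
                   [3, 0, 2]])                                            # [batch_size=2, num_steps=3]
embeddings = tf.constant(np.arange(20, dtype=np.float32).reshape(4, 5))   # [num_classes=4, state_size=5]
vectors = tf.nn.embedding_lookup(params=embeddings, ids=ids)              # -> [2, 3, 5]

with tf.Session() as sess:
    print(sess.run(vectors).shape)   # (2, 3, 5): every id is replaced by its embedding row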

'''The dynamic_rnn approach
   - In the earlier hand-rolled cell and static_rnn examples, the per-step tensors were kept in a Python list,
     which makes graph construction very slow
   - dynamic_rnn builds the graph dynamically at run time
'''
def build_multilayer_lstm_graph_with_dynamic_rnn(
    state_size = state_size,
    num_classes = num_classes,
    batch_size = batch_size,
    num_steps = num_steps,
    num_layers = 3,
    learning_rate = learning_rate
    ):
    reset_graph()

    x = tf.placeholder(tf.int32, [batch_size, num_steps], name='x')
    y = tf.placeholder(tf.int32, [batch_size, num_steps], name='y')

    embeddings = tf.get_variable(name='embedding_matrix', shape=[num_classes, state_size])
    '''The input here is 3-D: [batch_size, num_steps, state_size]
       - embedding_lookup(params, ids) looks up the rows of params indexed by ids, similar to array indexing into a matrix;
         here the 2-D ids are looked up in the 2-D embeddings, each integer mapping to one row, so the result is
         [batch_size, num_steps, state_size]
    '''
    rnn_inputs = tf.nn.embedding_lookup(params=embeddings, ids=x)

    cell = tf.nn.rnn_cell.LSTMCell(num_units=state_size, state_is_tuple=True)
    cell = tf.nn.rnn_cell.MultiRNNCell(cells=[cell]*num_layers, state_is_tuple=True)
    init_state = cell.zero_state(batch_size, dtype=tf.float32)
    '''Use dynamic_rnn'''
    rnn_outputs, final_state = tf.nn.dynamic_rnn(cell=cell, inputs=rnn_inputs,
                                                 initial_state=init_state)

    with tf.variable_scope('softmax'):
        W = tf.get_variable('W', [state_size, num_classes])
        b = tf.get_variable('b', [num_classes], initializer=tf.constant_initializer(0.0))
    rnn_outputs = tf.reshape(rnn_outputs, [-1, state_size])   # flatten to a 2-D matrix
    y_reshape = tf.reshape(y, [-1])
    logits = tf.matmul(rnn_outputs, W) + b                     # matrix multiplication

    total_loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y_reshape))
    train_step = tf.train.AdamOptimizer(learning_rate).minimize(total_loss)

    return dict(x = x,
                y = y,
                init_state = init_state,
                final_state = final_state,
                total_loss = total_loss,
                train_step = train_step)

Run it:

start_time = time.time()
g = build_multilayer_lstm_graph_with_dynamic_rnn()
print("Graph build time:", time.time() - start_time)
start_time = time.time()
train_rnn(g, 3)
print("Training time:", time.time() - start_time)

Results (note this is a 3-layer LSTM):
Time to build the graph: 7.616888523101807, much faster than static_rnn.
Training time (a 3-layer LSTM, so still fairly slow):
Average loss for epoch 0: 3.604653576324726
Average loss for epoch 1: 3.3202743626188957
Average loss for epoch 2: 3.3155322650383257
Training time: 303.5468375682831

4. The tf.scan approach

If you are not familiar with tf.scan, it is worth reading the official API docs; it is a bit involved.
There is also an introduction on YouTube; see here.
scan is a higher-order function. In general it works like this: given a sequence [x_0, x_1, ..., x_n] and an initial state y_{-1}, it computes y_t = f(x_t, y_{t-1}) to produce the output sequence [y_0, y_1, ..., y_n].
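As a minimal illustration of that recurrence, independent of RNNs, here is a cumulative sum written with tf.scan, where f(x_t, y_{t-1}) = x_t + y_{t-1}:

elems = tf.constant([1, 2, 3, 4, 5])
# fn receives (previous output y_{t-1}, current element x_t); initializer plays the role of y_{-1}
cumsum = tf.scan(fn=lambda y_prev, x_t: y_prev + x_t,
                 elems=elems,
                 initializer=tf.constant(0))

with tf.Session() as sess:
    print(sess.run(cumsum))   # [ 1  3  6 10 15]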
Build the computation graph
tf.transpose(rnn_inputs, [1,0,2]) swaps the first and second dimensions of rnn_inputs, giving [num_steps, batch_size, state_size]. dynamic_rnn has a time_major parameter that controls whether num_steps is the first dimension; it defaults to False, i.e., num_steps is not the first dimension.
tf.scan unpacks elems along the first dimension, so each call processes the data for one step (similar to our static_rnn example).
The argument a has the same structure as initializer, so a[1] is the state; the cell needs x and the state to compute the next step.
On each iteration the cell returns an rnn_output of shape (batch_size, state_size) and the corresponding state; after num_steps iterations the stacked rnn_outputs have shape (num_steps, batch_size, state_size), and likewise for the states.
Every input x produces a state (collected into final_states); we only need the last one (final_state).

'''Using tf.scan to get the effect of dynamic_rnn'''
def build_multilayer_lstm_graph_with_scan(
    state_size = state_size,
    num_classes = num_classes,
    batch_size = batch_size,
    num_steps = num_steps,
    num_layers = 3,
    learning_rate = learning_rate
    ):
    reset_graph()

    x = tf.placeholder(tf.int32, [batch_size, num_steps], name='x')
    y = tf.placeholder(tf.int32, [batch_size, num_steps], name='y')

    embeddings = tf.get_variable(name='embedding_matrix', shape=[num_classes, state_size])
    '''The input here is 3-D: [batch_size, num_steps, state_size]'''
    rnn_inputs = tf.nn.embedding_lookup(params=embeddings, ids=x)

    '''Build the multi-layer cell: create one cell, then stack it with MultiRNNCell'''
    cell = tf.nn.rnn_cell.LSTMCell(num_units=state_size, state_is_tuple=True)
    cell = tf.nn.rnn_cell.MultiRNNCell(cells=[cell]*num_layers, state_is_tuple=True)
    init_state = cell.zero_state(batch_size, dtype=tf.float32)
    '''The tf.scan approach
       - tf.transpose(rnn_inputs, [1,0,2]) swaps the first two dimensions, giving [num_steps, batch_size, state_size];
         dynamic_rnn has a time_major parameter for exactly this, False by default
       - tf.scan unpacks elems along the first dimension, so each call processes one step of data (like our static_rnn example)
       - a has the same structure as initializer, so a[1] is the state; the cell needs x and the state to step forward
       - each iteration the cell returns an rnn_output of shape (batch_size, state_size) and the matching state;
         after num_steps the stacked rnn_outputs have shape (num_steps, batch_size, state_size)
       - every input x yields a state (collected in final_states); we only need the last one (final_state)
    '''
    def testfn(a, x):
        return cell(x, a[1])
    rnn_outputs, final_states = tf.scan(fn=testfn, elems=tf.transpose(rnn_inputs, [1,0,2]),
                                        initializer=(tf.zeros([batch_size, state_size]), init_state)
                                        )
    '''Or equivalently with a lambda'''
    #rnn_outputs, final_states = tf.scan(lambda a, x: cell(x, a[1]), tf.transpose(rnn_inputs, [1,0,2]),
    #                                    initializer=(tf.zeros([batch_size, state_size]), init_state))

    final_state = tuple([tf.nn.rnn_cell.LSTMStateTuple(
        tf.squeeze(tf.slice(c, [num_steps-1,0,0], [1,batch_size,state_size])),
        tf.squeeze(tf.slice(h, [num_steps-1,0,0], [1,batch_size,state_size]))) for c, h in final_states])

    with tf.variable_scope('softmax'):
        W = tf.get_variable('W', [state_size, num_classes])
        b = tf.get_variable('b', [num_classes], initializer=tf.constant_initializer(0.0))
    rnn_outputs = tf.reshape(rnn_outputs, [-1, state_size])
    y_reshape = tf.reshape(y, [-1])
    logits = tf.matmul(rnn_outputs, W) + b

    total_loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y_reshape))
    train_step = tf.train.AdamOptimizer(learning_rate).minimize(total_loss)

    return dict(x = x,
                y = y,
                init_state = init_state,
                final_state = final_state,
                total_loss = total_loss,
                train_step = train_step)

Results
Time to build the graph: 8.685610055923462 (slightly slower than dynamic_rnn)
Training time (about the same as dynamic_rnn):
The scan version is only marginally slower than dynamic_rnn, but it is more flexible and makes the execution easier to follow. It is also easier to modify, for example skipping a time step and feeding the state from t-2 directly into step t.
Average loss for epoch 0: 3.6226147892831384
Average loss for epoch 1: 3.3211338095281318
Average loss for epoch 2: 3.3158331972429123
Training time: 303.2535448074341

III. Multi-layer RNNs

1. Structure

An LSTM carries two kinds of state: the memory cell c and the hidden state h. TensorFlow keeps them as a tuple, which is why the state_is_tuple parameter appeared when we built the cells for dynamic_rnn above; the tuple form also runs faster.
The multi-layer structure looks like the figure below.
(Figure: a multi-layer RNN)
We can wrap the stack so that it looks like a single cell.
(Figure: wrapping the cells into one)

2. Code

In TensorFlow this is done with tf.nn.rnn_cell.MultiRNNCell:
Declare a cell
Pass [cell]*num_layers to MultiRNNCell
Note: for LSTM, set state_is_tuple=True

cell = tf.nn.rnn_cell.LSTMCell(num_units=state_size, state_is_tuple=True)
cell = tf.nn.rnn_cell.MultiRNNCell(cells=[cell]*num_layers, state_is_tuple=True)
init_state = cell.zero_state(batch_size, dtype=tf.float32)
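Continuing from the snippet above, the (c, h) tuple structure described earlier can be inspected directly: init_state is a tuple containing one LSTMStateTuple per layer. A small sketch that only prints the shapes:

# init_state is a tuple of num_layers LSTMStateTuple(c, h) entries,
# each c and h being a tensor of shape [batch_size, state_size]
for layer_idx, layer_state in enumerate(init_state):
    print(layer_idx, layer_state.c.get_shape(), layer_state.h.get_shape())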

IV. Dropout

Dropout is applied to the inputs and outputs of a cell layer, not to the recurrent connections.

1. A single cell layer

In static_rnn:
Declare a placeholder: keep_prob = tf.placeholder(tf.float32, name='keep_prob')
Inputs: rnn_inputs = [tf.nn.dropout(rnn_input, keep_prob) for rnn_input in rnn_inputs]
Outputs: rnn_outputs = [tf.nn.dropout(rnn_output, keep_prob) for rnn_output in rnn_outputs]
Add it to the feed_dict: feed_dict = {g['x']: X, g['y']: Y, g['keep_prob']: keep_prob}
In dynamic_rnn or scan:
Just apply it directly and the rest is analogous: rnn_inputs = tf.nn.dropout(rnn_inputs, keep_prob); see the sketch below.
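Putting the dynamic_rnn pieces together, a minimal sketch, assuming the x, embeddings, cell and init_state tensors defined in the dynamic_rnn graph above:

keep_prob = tf.placeholder(tf.float32, name='keep_prob')

rnn_inputs = tf.nn.embedding_lookup(embeddings, x)      # [batch_size, num_steps, state_size]
rnn_inputs = tf.nn.dropout(rnn_inputs, keep_prob)       # dropout on the layer input

rnn_outputs, final_state = tf.nn.dynamic_rnn(cell, rnn_inputs, initial_state=init_state)
rnn_outputs = tf.nn.dropout(rnn_outputs, keep_prob)     # dropout on the layer output

# at training time feed e.g. keep_prob=0.8; at test time feed 1.0:
# feed_dict = {g['x']: X, g['y']: Y, g['keep_prob']: 0.8}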

2. Multi-layer cells

We said above that MultiRNNCell lets us treat the stacked layers as a single cell, so how do we apply dropout to every layer?
Use tf.nn.rnn_cell.DropoutWrapper.
Option 1: cell = tf.nn.rnn_cell.DropoutWrapper(cell, input_keep_prob=input_keep_prob, output_keep_prob=output_keep_prob)
If input_keep_prob and output_keep_prob are both 0.9, the effective keep probability between two layers is 0.9*0.9 = 0.81.
Option 2: apply only input_keep_prob (or only output_keep_prob) to the basic cell, and likewise only one of them to the MultiRNNCell:

cell = tf.nn.rnn_cell.LSTMCell(num_units=state_size, state_is_tuple=True)
cell = tf.nn.rnn_cell.DropoutWrapper(cell, input_keep_prob=keep_prob)
cell = tf.nn.rnn_cell.MultiRNNCell(cells=[cell]*num_layers, state_is_tuple=True)
cell = tf.nn.rnn_cell.DropoutWrapper(cell, output_keep_prob=keep_prob)

V. Layer Normalization

1. Overview

Layer Normalization was inspired by Batch Normalization and is designed for RNNs; see the paper for details.
Batch Normalization mainly targets ordinary deep networks and CNNs; for its operations and derivation see my earlier post.
It can speed up training and lead to better results.
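Concretely, for one example x with H features (H = state_size here), layer normalization computes the statistics over that single example's features, mu = (1/H) * sum_i x_i and sigma^2 = (1/H) * sum_i (x_i - mu)^2, and outputs LN(x)_i = scale_i * (x_i - mu) / sqrt(sigma^2 + epsilon) + shift_i, where scale and shift are learned per-feature parameters. This is exactly what the ln() function below does with tf.nn.moments along axis 1.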

2. Code

Copy the source of LSTMCell and modify it.
The layer normalization function:

The tensor passed in is 2-D; we layer-normalize it along its feature dimension
tf.nn.moments computes the mean and variance of the tensor
The normalized result is then scaled and shifted

'''layer normalization'''
def ln(tensor, scope=None, epsilon=1e-5):
  assert(len(tensor.get_shape()) == 2)
  m, v = tf.nn.moments(tensor, [1], keep_dims=True)
  if not isinstance(scope, str):
    scope = ''
  with tf.variable_scope(scope + 'layer_norm'):
    scale = tf.get_variable(name='scale',
                            shape=[tensor.get_shape()[1]],
                            initializer=tf.constant_initializer(1))
    shift = tf.get_variable('shift',
                            [tensor.get_shape()[1]],
                            initializer=tf.constant_initializer(0))
  LN_initial = (tensor - m) / tf.sqrt(v + epsilon)
  return LN_initial * scale + shift

In LSTMCell's call method, apply layer normalization to i, j, f, o:

Set bias=False in the _linear call, because layer normalization adds its own shift.

'''bias is set to False here, because the normalization adds its own shift'''
lstm_matrix = _linear([inputs, m_prev], 4 * self._num_units, bias=False)
i, j, f, o = array_ops.split(
    value=lstm_matrix, num_or_size_splits=4, axis=1)
'''apply layer normalization'''
i = ln(i, scope='i/')
j = ln(j, scope='j/')
f = ln(f, scope='f/')
o = ln(o, scope='o/')

Build the computation graph

Choose between plain RNN, GRU, and LSTM
Dropout
Layer Normalization

'''The final combined model:
   - plain RNN, GRU, or LSTM
   - dropout
   - layer normalization
'''
from LayerNormalizedLSTMCell import LayerNormalizedLSTMCell  # import the layer-normalized LSTMCell from its own file

def build_final_graph(
    cell_type = None,
    state_size = state_size,
    num_classes = num_classes,
    batch_size = batch_size,
    num_steps = num_steps,
    num_layers = 3,
    build_with_dropout = False,
    learning_rate = learning_rate):

    reset_graph()

    x = tf.placeholder(tf.int32, [batch_size, num_steps], name='x')
    y = tf.placeholder(tf.int32, [batch_size, num_steps], name='y')
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')

    embeddings = tf.get_variable('embedding_matrix', [num_classes, state_size])
    rnn_inputs = tf.nn.embedding_lookup(embeddings, x)

    if cell_type == 'GRU':
        cell = tf.nn.rnn_cell.GRUCell(state_size)
    elif cell_type == 'LSTM':
        cell = tf.nn.rnn_cell.LSTMCell(state_size, state_is_tuple=True)
    elif cell_type == 'LN_LSTM':
        cell = LayerNormalizedLSTMCell(state_size)  # our modified cell, imported from its own file
    else:
        cell = tf.nn.rnn_cell.BasicRNNCell(state_size)

    if build_with_dropout:
        cell = tf.nn.rnn_cell.DropoutWrapper(cell, input_keep_prob=keep_prob)

    init_state = cell.zero_state(batch_size, tf.float32)
    '''dynamic_rnn'''
    rnn_outputs, final_state = tf.nn.dynamic_rnn(cell, rnn_inputs, initial_state=init_state)

    with tf.variable_scope('softmax'):
        W = tf.get_variable('W', [state_size, num_classes])
        b = tf.get_variable('b', [num_classes], initializer=tf.constant_initializer(0.0))
    rnn_outputs = tf.reshape(rnn_outputs, [-1, state_size])
    y_reshaped = tf.reshape(y, [-1])
    logits = tf.matmul(rnn_outputs, W) + b
    predictions = tf.nn.softmax(logits)

    total_loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped))
    train_step = tf.train.AdamOptimizer(learning_rate).minimize(total_loss)

    return dict(
        x = x,
        y = y,
        keep_prob = keep_prob,
        init_state = init_state,
        final_state = final_state,
        total_loss = total_loss,
        train_step = train_step,
        preds = predictions,
        saver = tf.train.Saver()
    )

VI. Generating Text

1. Overview

After training, save the graph variables to disk so they can simply be restored next time.
We give the model a first character; the RNN then generates characters one by one, each conditioned on the previous character.
So num_steps=1 and batch_size=1 (each prediction has shape (1, num_classes) and we sample one character from it, hence num_steps=1).

2. Code

Build the graph (just pass the parameters): g = build_final_graph(cell_type='LN_LSTM', num_steps=1, batch_size=1)
Generating the text:

Restore the trained checkpoint
Look up the integer id of the given first character
Loop over the number of characters to generate; each iteration produces one character

'''Generate text'''
def generate_characters(g, checkpoint, num_chars, prompt='A', pick_top_chars=None):
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        g['saver'].restore(sess, checkpoint)    # restore the trained checkpoint
        state = None
        current_char = vocab_to_idx[prompt]     # integer id of the given first character
        chars = [current_char]
        for i in range(num_chars):              # how many characters to generate in total
            if state is not None:               # state is None on the first step, since the graph initializes it to zeros
                feed_dict = {g['x']: [[current_char]], g['init_state']: state}   # feed the current character
            else:
                feed_dict = {g['x']: [[current_char]]}
            preds, state = sess.run([g['preds'], g['final_state']], feed_dict)   # preds holds the predicted probabilities, shape (1, num_classes)
            if pick_top_chars is not None:                # if only the top-k most likely characters should be considered
                p = np.squeeze(preds)
                p[np.argsort(p)[:-pick_top_chars]] = 0    # zero out the rest
                p = p / np.sum(p)                         # renormalize, since np.random.choice requires probabilities summing to 1
                current_char = np.random.choice(vocab_size, 1, p=p)[0]    # sample one character according to p
            else:
                current_char = np.random.choice(vocab_size, 1, p=np.squeeze(preds))[0]
            chars.append(current_char)
    chars = map(lambda x: idx_to_vocab[x], chars)
    result = "".join(chars)
    print(result)
    return result
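A hypothetical call might look like the following; the checkpoint path is an illustration only, use whatever path you passed as the save argument of train_rnn:

g = build_final_graph(cell_type='LN_LSTM', num_steps=1, batch_size=1)
generate_characters(g, checkpoint="saves/LN_LSTM_30_epochs",   # hypothetical checkpoint path
                    num_chars=750, prompt='A', pick_top_chars=5)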

Results

Training takes a long time; here an LSTM was trained for 30 epochs, with the following result.
You can tune the parameters yourself and may get better output.

ANKO: HFOFMFRone s the statlighte thithe thit.BODEN --I I's a tomir.I'tshis and on ar tald the theand this he sile be cares hat s ond tho fo hour he singe sime shind and somante tat ond treang tatsing of the an the to to fook.. Ir ard the with ane she stale..ANTE --KINEShow the ard and a beat the weringe be thing or.Bo hith tho he melan to the mute steres.The singer stis ard stis.BACE CANKONS CORESard the sids ing tho the the sackes tom theINWe stoe shit a dome thorate seomser hith.Thatthow oundTANTONT. SEAT THONTITE SERTI                         1  23SHe the mathe a tomonerind is ingit ofres treacentit. Sher stard on this the tor an the candin he whor he sath heres andstha dortour tit thas stand. I'd and or a

Update 2017/06/25: new results

Switched to a larger dataset (see here) and used the layer-normalized LSTM model.
Parameter settings:
num_steps=80
batch_size=50
state_size=512
num_classes = vocab_size
learning_rate = 5e-4
30 epochs
After running overnight on the lab machine, the result does look somewhat better:

AKTIN:  Yousa hand it have to turn you, sir.I have. I've got to here hard on myplay as a space state, and why hehappened. What we alwaws whothis?JOCASTAND :PADM You, sir!A battle. An arm of the ship is still.THE WINDEN'S CORUSHan's laser guns at the forest fire.  The crowd spots his blackfolkwark and sees the bedroom and twists and sees Leiawho is shaking.  A huge creature has a long time,hold her hand and his timmed, that we see the saulyand.  Thecrowd ruised by the staircase.EXT. MAZ' CASTLE RUINS - DAYRey and Wicket and CAMERA is heard.   Here as so they helfthis tonight, he spins and sit in a startled bright.LUKE(into propecy)The defenstity! Thank you.LUKEI'm afraid to have a lossing live,or help. We're

Reference

https://r2rt.com/recurrent-neural-networks-in-tensorflow-ii.html
https://karpathy.github.io/2015/05/21/rnn-effectiveness/
http://jmlr.org/proceedings/papers/v37/ioffe15.pdf
tensorflow scan:
https://www.tensorflow.org/api_docs/python/tf/scan
https://www.youtube.com/watch?v=A6qJMB3stE4&t=621s

Original post: http://lawlite.me/2017/06/21/RNN-LSTM%E5%BE%AA%E7%8E%AF%E7%A5%9E%E7%BB%8F%E7%BD%91%E7%BB%9C-03Tensorflow%E8%BF%9B%E9%98%B6%E5%AE%9E%E7%8E%B0/
