TensorFlow Learning Diary 10


1. Bi-directional Recurrent Neural Network (LSTM)

Explanation: Build a bi-directional recurrent neural network (LSTM) to classify the MNIST digits dataset.

from __future__ import print_function
import tensorflow as tf
from tensorflow.contrib import rnn
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("data/", one_hot=True)

# Training Parameters
learning_rate = 0.001
# training_steps = 10000
training_steps = 200
batch_size = 128
display_step = 200

# Network Parameters
num_input = 28  # MNIST data input (img shape: 28*28)
timesteps = 28  # timesteps
num_hidden = 128  # hidden layer num of features
num_classes = 10  # MNIST total classes (0-9 digits)

# tf Graph input
X = tf.placeholder("float", [None, timesteps, num_input])
Y = tf.placeholder("float", [None, num_classes])

# Define weights
weights = {
    # Hidden layer weights => 2*n_hidden because of forward + backward cells
    'out': tf.Variable(tf.random_normal([2 * num_hidden, num_classes]))
}
biases = {
    'out': tf.Variable(tf.random_normal([num_classes]))
}


def BiRNN(x, weights, biases):
    # Prepare data shape to match `rnn` function requirements
    # Current data input shape: (batch_size, timesteps, n_input)
    # Required shape: 'timesteps' tensors list of shape (batch_size, num_input)

    # Unstack to get a list of 'timesteps' tensors of shape (batch_size, num_input)
    x = tf.unstack(x, timesteps, 1)

    # Define lstm cells with tensorflow
    # Forward direction cell
    lstm_fw_cell = rnn.BasicLSTMCell(num_hidden, forget_bias=1.0)
    # Backward direction cell
    lstm_bw_cell = rnn.BasicLSTMCell(num_hidden, forget_bias=1.0)

    # Get lstm cell output
    try:
        outputs, _, _ = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,
                                                     dtype=tf.float32)
    except Exception:  # Old TensorFlow version only returns outputs not states
        outputs = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,
                                               dtype=tf.float32)

    # Linear activation, using rnn inner loop last output
    return tf.matmul(outputs[-1], weights['out']) + biases['out']


logits = BiRNN(X, weights, biases)
prediction = tf.nn.softmax(logits)

# Define loss and optimizer
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    logits=logits, labels=Y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)

# Evaluate model (with test logits, for dropout to be disabled)
correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()

# Start training
with tf.Session() as sess:
    # Run the initializer
    sess.run(init)

    for step in range(1, training_steps + 1):
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        # Reshape data to get 28 seq of 28 elements
        batch_x = batch_x.reshape((batch_size, timesteps, num_input))
        # Run optimization op (backprop)
        sess.run(train_op, feed_dict={X: batch_x, Y: batch_y})
        if step % display_step == 0 or step == 1:
            # Calculate batch loss and accuracy
            loss, acc = sess.run([loss_op, accuracy], feed_dict={X: batch_x,
                                                                 Y: batch_y})
            print("Step " + str(step) + ", Minibatch Loss= " + \
                  "{:.4f}".format(loss) + ", Training Accuracy= " + \
                  "{:.3f}".format(acc))

    print("Optimization Finished!")

    # Calculate accuracy for 128 mnist test images
    test_len = 128
    test_data = mnist.test.images[:test_len].reshape((-1, timesteps, num_input))
    test_label = mnist.test.labels[:test_len]
    print("Testing Accuracy:", \
          sess.run(accuracy, feed_dict={X: test_data, Y: test_label}))

(1) BasicLSTMCell: __init__(self, num_units, forget_bias=1.0, state_is_tuple=True, activation=None, reuse=None), where num_units is an int giving the number of units in the LSTM cell, and forget_bias is a float giving the bias added to the forget gates.

(2) static_bidirectional_rnn: static_bidirectional_rnn(cell_fw, cell_bw, inputs, initial_state_fw=None, initial_state_bw=None, dtype=None, sequence_length=None, scope=None), where inputs is a length-T list of inputs, each a tensor of shape [batch_size, input_size], or a nested tuple of such elements (a short usage sketch follows below).
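
A minimal shape check for these two APIs, assuming TensorFlow 1.x with tf.contrib available; the sizes below are illustrative only:

import tensorflow as tf
from tensorflow.contrib import rnn

# A length-4 list of tensors, each of shape [batch_size=2, input_size=8]
inputs = tf.unstack(tf.zeros([2, 4, 8]), 4, 1)
fw_cell = rnn.BasicLSTMCell(16, forget_bias=1.0)  # forward cell, 16 units
bw_cell = rnn.BasicLSTMCell(16, forget_bias=1.0)  # backward cell, 16 units
outputs, _, _ = rnn.static_bidirectional_rnn(fw_cell, bw_cell, inputs, dtype=tf.float32)
print(len(outputs), outputs[0].shape)  # 4 outputs, each (2, 32): forward and backward concatenated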


2. Recurrent Neural Network (LSTM) 

Explanation: Build a recurrent neural network (LSTM) to classify the MNIST digits dataset.

from __future__ import print_function
import tensorflow as tf
from tensorflow.contrib import rnn

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("data/", one_hot=True)

# Training Parameters
learning_rate = 0.001
# training_steps = 10000
training_steps = 200
batch_size = 128
display_step = 200

# Network Parameters
num_input = 28  # MNIST data input (img shape: 28*28)
timesteps = 28  # timesteps
num_hidden = 128  # hidden layer num of features
num_classes = 10  # MNIST total classes (0-9 digits)

# tf Graph input
X = tf.placeholder("float", [None, timesteps, num_input])
Y = tf.placeholder("float", [None, num_classes])

# Define weights
weights = {
    'out': tf.Variable(tf.random_normal([num_hidden, num_classes]))
}
biases = {
    'out': tf.Variable(tf.random_normal([num_classes]))
}


def RNN(x, weights, biases):
    # Prepare data shape to match `rnn` function requirements
    # Current data input shape: (batch_size, timesteps, n_input)
    # Required shape: 'timesteps' tensors list of shape (batch_size, n_input)

    # Unstack to get a list of 'timesteps' tensors of shape (batch_size, n_input)
    x = tf.unstack(x, timesteps, 1)

    # Define a lstm cell with tensorflow
    lstm_cell = rnn.BasicLSTMCell(num_hidden, forget_bias=1.0)

    # Get lstm cell output
    outputs, states = rnn.static_rnn(lstm_cell, x, dtype=tf.float32)

    # Linear activation, using rnn inner loop last output
    return tf.matmul(outputs[-1], weights['out']) + biases['out']


logits = RNN(X, weights, biases)
prediction = tf.nn.softmax(logits)

# Define loss and optimizer
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    logits=logits, labels=Y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)

# Evaluate model (with test logits, for dropout to be disabled)
correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()

# Start training
with tf.Session() as sess:
    # Run the initializer
    sess.run(init)

    for step in range(1, training_steps + 1):
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        # Reshape data to get 28 seq of 28 elements
        batch_x = batch_x.reshape((batch_size, timesteps, num_input))
        # Run optimization op (backprop)
        sess.run(train_op, feed_dict={X: batch_x, Y: batch_y})
        if step % display_step == 0 or step == 1:
            # Calculate batch loss and accuracy
            loss, acc = sess.run([loss_op, accuracy], feed_dict={X: batch_x,
                                                                 Y: batch_y})
            print("Step " + str(step) + ", Minibatch Loss= " + \
                  "{:.4f}".format(loss) + ", Training Accuracy= " + \
                  "{:.3f}".format(acc))

    print("Optimization Finished!")

    # Calculate accuracy for 128 mnist test images
    test_len = 128
    test_data = mnist.test.images[:test_len].reshape((-1, timesteps, num_input))
    test_label = mnist.test.labels[:test_len]
    print("Testing Accuracy:", \
          sess.run(accuracy, feed_dict={X: test_data, Y: test_label}))
(1) static_rnn: static_rnn(cell, inputs, initial_state=None, dtype=None, sequence_length=None, scope=None), where inputs is a length-T list of inputs, each a Tensor of shape [batch_size, input_size], or a nested tuple of such elements, and sequence_length specifies the length of each sequence in inputs.
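
The sequence_length argument is what enables the "dynamic calculation" used in section 3 below: per the TensorFlow documentation, outputs past a sample's declared length are zeroed and its state stops updating. A minimal sketch (TensorFlow 1.x, arbitrary shapes):

import tensorflow as tf
from tensorflow.contrib import rnn

# 5 timesteps, batch of 2, 3 features per step
inputs = tf.unstack(tf.ones([2, 5, 3]), 5, 1)
cell = rnn.BasicLSTMCell(4)
# The first sample is declared to be only 2 steps long, the second uses all 5
outputs, state = rnn.static_rnn(cell, inputs, dtype=tf.float32, sequence_length=[2, 5])
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(outputs[4])[0])  # all zeros: past step 2 the first sample's output is zeroed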


3. Dynamic Recurrent Neural Network (LSTM) 

Explanation: Build a recurrent neural network (LSTM) that performs dynamic calculation to classify sequences of different lengths.

from __future__ import print_function
import tensorflow as tf
import random


#  TOY DATA GENERATOR
class ToySequenceData(object):
    """ Generate sequences of data with dynamic length. The dynamic calculation is
        performed thanks to the seqlen attribute that records every actual sequence length.
    """
    def __init__(self, n_samples=1000, max_seq_len=20, min_seq_len=3,
                 max_value=1000):
        self.data = []
        self.labels = []
        self.seqlen = []
        for i in range(n_samples):
            # Random sequence length
            len = random.randint(min_seq_len, max_seq_len)
            # Monitor sequence length for TensorFlow dynamic calculation
            self.seqlen.append(len)
            # Add a random or linear int sequence (50% prob)
            if random.random() < .5:
                # Generate a linear sequence
                rand_start = random.randint(0, max_value - len)
                s = [[float(i) / max_value] for i in
                     range(rand_start, rand_start + len)]
                # Pad sequence for dimension consistency
                s += [[0.] for i in range(max_seq_len - len)]
                self.data.append(s)
                self.labels.append([1., 0.])
            else:
                # Generate a random sequence
                s = [[float(random.randint(0, max_value)) / max_value]
                     for i in range(len)]
                # Pad sequence for dimension consistency
                s += [[0.] for i in range(max_seq_len - len)]
                self.data.append(s)
                self.labels.append([0., 1.])
        self.batch_id = 0

    def next(self, batch_size):
        """ Return a batch of data. When dataset end is reached, start over.
        """
        if self.batch_id == len(self.data):
            self.batch_id = 0
        batch_data = (self.data[self.batch_id:min(self.batch_id +
                                                  batch_size, len(self.data))])
        batch_labels = (self.labels[self.batch_id:min(self.batch_id +
                                                      batch_size, len(self.data))])
        batch_seqlen = (self.seqlen[self.batch_id:min(self.batch_id +
                                                      batch_size, len(self.data))])
        self.batch_id = min(self.batch_id + batch_size, len(self.data))
        return batch_data, batch_labels, batch_seqlen


# Parameters
learning_rate = 0.01
# training_steps = 10000
training_steps = 200
batch_size = 128
display_step = 200

# Network Parameters
seq_max_len = 20  # Sequence max length
n_hidden = 64  # hidden layer num of features
n_classes = 2  # linear sequence or not

trainset = ToySequenceData(n_samples=1000, max_seq_len=seq_max_len)
testset = ToySequenceData(n_samples=500, max_seq_len=seq_max_len)

# tf Graph input
x = tf.placeholder("float", [None, seq_max_len, 1])
y = tf.placeholder("float", [None, n_classes])
# A placeholder for indicating each sequence length
seqlen = tf.placeholder(tf.int32, [None])

# Define weights
weights = {
    'out': tf.Variable(tf.random_normal([n_hidden, n_classes]))
}
biases = {
    'out': tf.Variable(tf.random_normal([n_classes]))
}


def dynamicRNN(x, seqlen, weights, biases):
    # Prepare data shape to match `rnn` function requirements
    # Current data input shape: (batch_size, n_steps, n_input)
    # Required shape: 'n_steps' tensors list of shape (batch_size, n_input)

    # Unstack to get a list of 'n_steps' tensors of shape (batch_size, n_input)
    x = tf.unstack(x, seq_max_len, 1)

    # Define a lstm cell with tensorflow
    lstm_cell = tf.contrib.rnn.BasicLSTMCell(n_hidden)

    # Get lstm cell output, providing 'sequence_length' will perform dynamic
    # calculation.
    outputs, states = tf.contrib.rnn.static_rnn(lstm_cell, x, dtype=tf.float32,
                                                sequence_length=seqlen)

    # 'outputs' is a list of output at every timestep, we pack them in a Tensor
    # and change back dimension to [batch_size, n_step, n_input]
    outputs = tf.stack(outputs)
    outputs = tf.transpose(outputs, [1, 0, 2])

    # Hack to build the indexing and retrieve the right output.
    batch_size = tf.shape(outputs)[0]
    # Start indices for each sample
    index = tf.range(0, batch_size) * seq_max_len + (seqlen - 1)
    # Indexing
    outputs = tf.gather(tf.reshape(outputs, [-1, n_hidden]), index)

    # Linear activation, using outputs computed above
    return tf.matmul(outputs, weights['out']) + biases['out']


pred = dynamicRNN(x, seqlen, weights, biases)

# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)

# Evaluate model
correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()

# Start training
with tf.Session() as sess:
    # Run the initializer
    sess.run(init)

    for step in range(1, training_steps + 1):
        batch_x, batch_y, batch_seqlen = trainset.next(batch_size)
        # Run optimization op (backprop)
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y,
                                       seqlen: batch_seqlen})
        if step % display_step == 0 or step == 1:
            # Calculate batch accuracy
            acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y,
                                                seqlen: batch_seqlen})
            # Calculate batch loss
            loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y,
                                             seqlen: batch_seqlen})
            print("Step " + str(step * batch_size) + ", Minibatch Loss= " + \
                  "{:.6f}".format(loss) + ", Training Accuracy= " + \
                  "{:.5f}".format(acc))

    print("Optimization Finished!")

    # Calculate accuracy
    test_data = testset.data
    test_label = testset.labels
    test_seqlen = testset.seqlen
    print("Testing Accuracy:", \
          sess.run(accuracy, feed_dict={x: test_data, y: test_label,
                                        seqlen: test_seqlen}))

(1) The return value outputs of tf.contrib.rnn.static_rnn is a list of n_steps tensors, each of shape [batch_size, n_hidden].

(2) tf.transpose(outputs, [1, 0, 2]) permutes the stacked outputs into shape [batch_size, n_steps, n_hidden].

(3) tf.gather(params, indices, validate_indices=None, name=None): gathers slices from params along axis 0 according to the given indices, which makes it suitable for extracting non-contiguous subsets (see the sketch below).
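
A minimal numeric sketch of this stack/transpose/gather indexing trick (the shapes and values are arbitrary and not tied to the model above):

import tensorflow as tf

batch_size, n_steps, n_hidden = 2, 4, 3
seqlen = tf.constant([2, 4])  # actual lengths of the two sequences
# Stand-in for the transposed RNN outputs of shape [batch_size, n_steps, n_hidden]
outputs = tf.reshape(tf.range(batch_size * n_steps * n_hidden, dtype=tf.float32),
                     [batch_size, n_steps, n_hidden])
# Flatten to [batch_size * n_steps, n_hidden], then pick row (i * n_steps + seqlen[i] - 1) per sample
index = tf.range(0, batch_size) * n_steps + (seqlen - 1)
last_relevant = tf.gather(tf.reshape(outputs, [-1, n_hidden]), index)
with tf.Session() as sess:
    print(sess.run(last_relevant))  # step 2 of sample 0 and step 4 of sample 1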


4. SIMD (Single Instruction, Multiple Data)

Explanation:

(1) MMX provides eight 64-bit registers for SIMD operations.

(2) SSE provides eight 128-bit registers for SIMD operations.

(3) AVX supports 256-bit SIMD operations.

(4) FMA is an extension of the AVX instruction set.


5. SIMD Data Types

Explanation:

(1) __m64: 64-bit packed integers (MMX).

(2) __m128: 128-bit packed single-precision floats (SSE).

(3) __m128d: 128-bit packed double-precision floats (SSE2).

(4) __m128i: 128-bit packed integers (SSE2).

(5) __m256: 256-bit packed single-precision floats (AVX).

(6) __m256d: 256-bit packed double-precision floats (AVX).

(7) __m256i: 256-bit packed integers (AVX).


6. SIMD Data Types and Their Registers

Explanation:

(1) 64-bit MM registers (MM0-MM7): __m64.

(2) 128-bit SSE registers (XMM0-XMM15): __m128, __m128d, __m128i.

(3) 256-bit AVX registers (YMM0-YMM15): __m256, __m256d, __m256i.


7. tf.random_normal

Explanation: tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None): outputs random values drawn from a normal distribution.
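
For example (a minimal sketch; the shape, stddev, and seed are arbitrary):

import tensorflow as tf

# Draw a 2x3 matrix of samples from N(0, 0.1^2)
w = tf.random_normal([2, 3], mean=0.0, stddev=0.1, seed=42)
with tf.Session() as sess:
    print(sess.run(w))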


8. tf.truncated_normal

Explanation: tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None): outputs random values drawn from a truncated normal distribution. The generated values follow a normal distribution with the specified mean and standard deviation, except that values lying more than two standard deviations from the mean are dropped and re-drawn.
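
For example (a minimal sketch), every sampled value stays within two standard deviations of the mean:

import tensorflow as tf

# 10000 samples; values beyond 2 standard deviations are re-drawn
w = tf.truncated_normal([10000], mean=0.0, stddev=1.0, seed=42)
with tf.Session() as sess:
    samples = sess.run(w)
    print(samples.min(), samples.max())  # both within [-2.0, 2.0]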


9. eval

Explanation: eval is a method on Tensor that returns the tensor's value. Calling it triggers whatever graph computation is needed to produce that value, and it can only be called inside a session that has already been launched for the tensor's graph.
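
For example (a minimal sketch), eval() against the default session is equivalent to sess.run(t):

import tensorflow as tf

t = tf.constant([1, 2, 3]) * 2
with tf.Session() as sess:  # the with-block makes this the default session
    print(t.eval())         # [2 4 6], same as sess.run(t)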


10. tf.unstack

Explanation: unstack(value, num=None, axis=0, name="unstack")

(1)num: An 'int'. The length of the dimension 'axis'. Automatically inferred if 'None' (the default).
(2)axis: An 'int'. The axis to unstack along. Defaults to the first dimension. Supports negative indexes.

For example:

import tensorflow as tf

a = tf.constant([[3, 2], [4, 5]])
b = tf.constant([[1, 6], [7, 8]])
e = tf.unstack([a, b], axis=0)
f = tf.unstack([a, b], axis=1)
with tf.Session() as sess:
    print(sess.run(e))
    print(sess.run(f))
The output is as follows:
[array([[3, 2],
        [4, 5]], dtype=int32),
 array([[1, 6],
        [7, 8]], dtype=int32)]
[array([[3, 2],
        [1, 6]], dtype=int32),
 array([[4, 5],
        [7, 8]], dtype=int32)]


References:

[1] TensorFlow-Examples:https://github.com/aymericdamien/TensorFlow-Examples

[2] TensorFlow: understanding RNN structure through a binary classification problem: http://blog.csdn.net/u010041824/article/details/69290249
