Machine Learning with TensorFlow (6) -- Convolutional Neural Networks

This post implements a LeNet-style convolutional network on MNIST using TensorFlow's Estimator API. The steps are as follows:

Import the required packages

from __future__ import division, print_function, absolute_import

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('/tmp/data/', one_hot=False)
# one_hot=False keeps the labels as integer class indices, which is what
# sparse_softmax_cross_entropy_with_logits expects later on.

import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np

Note the use of __future__ in the code above (it is a module, not a built-in function): it is an effective way of letting Python 2.7 code use the corresponding Python 3.x behaviour of division, print and absolute imports.
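
As a quick illustration (my own example, not part of the original post), the effect of the division import is easy to see under Python 2.7:

from __future__ import division, print_function

print(3 / 2)   # 1.5, the Python 3 behaviour (plain Python 2.7 would print 1)
print(3 // 2)  # 1, floor division is still available explicitly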

Configure the parameters

# Training parameters
learning_rate = 0.01
num_steps = 2000
batch_size = 128

# Network parameters
num_input = 784   # MNIST images are 28*28 = 784 pixels
num_classes = 10  # ten digit classes
dropout = 0.75    # rate passed to tf.layers.dropout below

This introduces dropout: during training, dropout randomly removes a fraction of the connections, while at test time it is disabled, so the behaviour is switched according to the mode (the is_training flag below). Note also that the rate argument of tf.layers.dropout is the fraction of units to drop, so the value 0.75 above drops 75% of the activations; if the intention is to keep 75%, the rate should be 0.25.
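
A minimal sketch of the training/test switch (my own illustration, not from the original post), using the tf.layers.dropout semantics described above:

import tensorflow as tf

x = tf.ones([1, 10])
# rate is the fraction of units to drop, so rate=0.75 zeroes roughly 75% of them
train_out = tf.layers.dropout(x, rate=0.75, training=True)
test_out = tf.layers.dropout(x, rate=0.75, training=False)

with tf.Session() as sess:
    print(sess.run(train_out))  # mostly zeros, survivors scaled by 1/(1 - rate) = 4
    print(sess.run(test_out))   # identical to x: dropout is a no-op outside training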

Define the neural network

# Create the neural network
def conv_net(x_dict, n_classes, dropout, reuse, is_training):
    # Define a scope for reusing the variables
    with tf.variable_scope('ConvNet', reuse=reuse):
        # TF Estimator input is a dict, in case of multiple inputs
        x = x_dict['images']

        # MNIST data input is a 1-D vector of 784 features (28*28 pixels)
        # Reshape to match the picture format [Height, Width, Channel]
        # Tensor input becomes 4-D: [Batch Size, Height, Width, Channel]
        x = tf.reshape(x, shape=[-1, 28, 28, 1])

        # Convolution layer with 32 filters and a kernel size of 5
        conv1 = tf.layers.conv2d(x, 32, 5, activation=tf.nn.relu)
        # Max pooling (down-sampling) with strides of 2 and kernel size of 2
        conv1 = tf.layers.max_pooling2d(conv1, 2, 2)

        # Convolution layer with 64 filters and a kernel size of 3
        conv2 = tf.layers.conv2d(conv1, 64, 3, activation=tf.nn.relu)
        # Max pooling (down-sampling) with strides of 2 and kernel size of 2
        conv2 = tf.layers.max_pooling2d(conv2, 2, 2)

        # Flatten the data to a 1-D vector for the fully connected layer
        fc1 = tf.contrib.layers.flatten(conv2)

        # Fully connected layer
        fc1 = tf.layers.dense(fc1, 1024)
        # Apply dropout (if is_training is False, dropout is not applied)
        fc1 = tf.layers.dropout(fc1, rate=dropout, training=is_training)

        # Output layer, class prediction
        out = tf.layers.dense(fc1, n_classes)

    return out

A few points worth explaining:

  1. The reuse argument of tf.variable_scope controls whether get_variable() reuses variables that already exist instead of creating new ones. The training network is built first with reuse=False so that the weights are created; the test network is then built with reuse=True so that it shares exactly the same weights (see the sketch after this list).
  2. x = x_dict['images']. The Estimator passes its features in as a dict (to allow multiple inputs), so here the image tensor is pulled out of that dict.
  3. x = tf.reshape(x, shape=[-1, 28, 28, 1]). tf.reshape turns the 784-dimensional input vector into a 4-D tensor whose dimensions are batch size, height, width and channel. Setting a dimension to -1 tells TensorFlow to infer it from the data, so the batch size does not have to be fixed in advance.
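
A minimal, self-contained sketch of the variable sharing (my own illustration, not from the post):

import tensorflow as tf

def small_net(x, reuse):
    # Same pattern as conv_net: all variables live inside one scope
    with tf.variable_scope('SharedNet', reuse=reuse):
        return tf.layers.dense(x, 4, name='fc')

inputs = tf.placeholder(tf.float32, [None, 8])
out_train = small_net(inputs, reuse=False)  # first call creates the variables
out_test = small_net(inputs, reuse=True)    # second call reuses the same weights

# Only one kernel/bias pair exists even though the layer was built twice
print([v.name for v in tf.global_variables()])
# ['SharedNet/fc/kernel:0', 'SharedNet/fc/bias:0']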

Define the model function

# Define the model function (following the TF Estimator template)
def model_fn(features, labels, mode):
    # Build the network twice: both graphs share the same weights,
    # but dropout is only active in the training graph
    logits_train = conv_net(features, num_classes, dropout,
                            reuse=False, is_training=True)
    logits_test = conv_net(features, num_classes, dropout,
                           reuse=True, is_training=False)

    # Predictions
    pred_classes = tf.argmax(logits_test, axis=1)
    pred_probas = tf.nn.softmax(logits_test)

    # If prediction mode, early return
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions=pred_classes)

    # Define loss and optimizer
    loss_op = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=logits_train, labels=tf.cast(labels, dtype=tf.int32)))
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
    train_op = optimizer.minimize(loss_op,
                                  global_step=tf.train.get_global_step())

    # Evaluate the accuracy of the model
    acc_op = tf.metrics.accuracy(labels=labels, predictions=pred_classes)

    # TF Estimator requires an EstimatorSpec that specifies
    # the different ops for training, evaluation, etc.
    estim_specs = tf.estimator.EstimatorSpec(
        mode=mode,
        predictions=pred_classes,
        loss=loss_op,
        train_op=train_op,
        eval_metric_ops={'accuracy': acc_op})

    return estim_specs

Points worth noting:

  1. logits_train and logits_test correspond to the training and test versions of the network; the mode is selected through the is_training argument, so dropout is applied only to logits_train while both graphs share the same weights.
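
The loss in model_fn uses tf.nn.sparse_softmax_cross_entropy_with_logits, which is also why the data was loaded with one_hot=False: the sparse variant expects integer class indices, whereas tf.nn.softmax_cross_entropy_with_logits_v2 (available in recent TF 1.x versions) expects one-hot vectors. A small sketch with toy values, written by me for illustration:

import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1]])
sparse_labels = tf.constant([0])                # integer index, as with one_hot=False
onehot_labels = tf.constant([[1.0, 0.0, 0.0]])  # what one_hot=True would provide

loss_sparse = tf.nn.sparse_softmax_cross_entropy_with_logits(
    logits=logits, labels=sparse_labels)
loss_onehot = tf.nn.softmax_cross_entropy_with_logits_v2(
    logits=logits, labels=onehot_labels)

with tf.Session() as sess:
    # Both losses have the same value; only the label encoding differs
    print(sess.run([loss_sparse, loss_onehot]))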

Build the Estimator

# Build the Estimator
model = tf.estimator.Estimator(model_fn)

Define the input function and train

# Define the input function for training
input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'images': mnist.train.images}, y=mnist.train.labels,
    batch_size=batch_size, num_epochs=None, shuffle=True)
# shuffle=True reshuffles the examples; num_epochs=None repeats the data
# indefinitely, so training is bounded only by the steps argument below.

# Train the model
model.train(input_fn, steps=num_steps)

Because x is a dict, the MNIST training images are passed under the 'images' key that conv_net looks up. shuffle is set to True for training and to False later for evaluation and prediction, while num_epochs=None lets the input function cycle through the data indefinitely, so training runs for exactly num_steps batches. model.train then trains the model.
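
As an aside (my own sketch, not part of the original post), the same pipeline can be expressed with tf.data; in newer TF 1.x versions the Estimator accepts an input function that returns a Dataset:

def train_input_fn():
    # Equivalent to numpy_input_fn with shuffle=True and num_epochs=None
    dataset = tf.data.Dataset.from_tensor_slices(
        ({'images': mnist.train.images}, mnist.train.labels))
    dataset = dataset.shuffle(buffer_size=10000)  # shuffle the examples
    dataset = dataset.repeat()                    # cycle indefinitely
    dataset = dataset.batch(batch_size)
    return dataset

# model.train(train_input_fn, steps=num_steps) behaves the same way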

Evaluate the model

# Evaluate the model
# Define the input function for evaluation
input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'images': mnist.test.images}, y=mnist.test.labels,
    batch_size=batch_size, shuffle=False)
model.evaluate(input_fn)

Here the test set is fed through an input function of the same form, with shuffle=False, and model.evaluate evaluates the model in batches of batch_size, computing the metrics declared in eval_metric_ops.
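
model.evaluate returns a dict containing the metrics declared in eval_metric_ops together with the loss, so the accuracy can be printed directly, for example:

# 'accuracy' is the key defined in eval_metric_ops of model_fn
e = model.evaluate(input_fn)
print("Testing accuracy: {:.4f}".format(e['accuracy']))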

Predict single images

# Predict single images
n_images = 4
test_images = mnist.test.images[:n_images]
input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'images': test_images}, shuffle=False)
preds = list(model.predict(input_fn))
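
The matplotlib and numpy imports at the top are otherwise unused; one possible way to check the predictions visually (a sketch of my own, reusing n_images, test_images and preds from the block above):

# Display each test image together with the model's predicted class
for i in range(n_images):
    plt.imshow(np.reshape(test_images[i], [28, 28]), cmap='gray')
    plt.title('Model prediction: {}'.format(preds[i]))
    plt.show()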

That is the complete workflow for a convolutional neural network implemented with TensorFlow. Comparatively speaking, TensorFlow keeps the implementation quite simple and clear. More on this in later posts.
