TensorFlow Installation and Testing


Environment: Ubuntu 16.04 LTS running in a virtual machine.
Reference: the official TensorFlow installation guide.
Check the Python version that ships with the system:

xiaokai@ubuntu:~$ python3
Python 3.5.2 (default, Nov 17 2016, 17:05:23) 
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 

We install with pip, a package management system for installing and managing Python packages. Since our Python is 3.x, we need the matching pip3:

xiaokai@ubuntu:~$ sudo apt-get install python3-pip python-dev
Reading package lists... Done
Building dependency tree       
Reading state information... Done
python-dev is already the newest version (2.7.11-1).
python3-pip is already the newest version (8.1.1-2ubuntu0.4).
0 upgraded, 0 newly installed, 0 to remove and 292 not upgraded.

Both packages are already installed here; a fresh install is simple, just answer yes and press Enter at the prompts. (Note that python-dev provides the Python 2.7 headers; for a Python 3 setup, the matching header package is python3-dev, which is what the official guide pairs with python3-pip.)
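To double-check that pip3 is wired to the Python 3 interpreter, you can ask from Python itself (a minimal snippet; it assumes pip 8.x, which exposes pip.__version__). Run it with python3:

import sys
import pip  # raises ImportError if python3-pip is not installed

print(sys.version)      # interpreter version, e.g. 3.5.2 ...
print(pip.__version__)  # pip version, e.g. 8.1.1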

Next, choose the TensorFlow binary (wheel) that matches your setup. This is a virtual machine with no GPU, so pick the CPU-only build for Python 3.5:

# Ubuntu/Linux 64-bit, CPU only, Python 3.5
Wheel download URLs are listed on the official install page; the cp35-cp35m tag in the filename marks a CPython 3.5 build.

xiaokai@ubuntu:~$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.11.0-cp35-cp35m-linux_x86_64.whl

Download and install:

xiaokai@ubuntu:~$ sudo pip3 install --upgrade $TF_BINARY_URL
The directory '/home/xiaokai/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/home/xiaokai/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
You must give at least one requirement to install (see "pip help install")

TensorFlow was already installed here. Note the last line of the output: "You must give at least one requirement to install" means $TF_BINARY_URL expanded to nothing, which happens if the export was made in a different shell session; re-export it in the same shell before running pip3. The cache warnings can be silenced with sudo's -H flag, as the messages suggest. If the download is slow or gets interrupted, simply rerun the install command.

As seen above, the interpreter is Python 3.5.2, which matches the cp35 wheel.

Testing:

Check the installation path:

xiaokai@ubuntu:~$ python3 -c 'import os; import inspect; import tensorflow; print(os.path.dirname(inspect.getfile(tensorflow)))'
/usr/local/lib/python3.5/dist-packages/tensorflow
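Before running the full MNIST example, a minimal smoke test confirms the install works end to end (a short sketch in the classic TF 0.x hello-world style; run it with python3):

import tensorflow as tf

# Build a one-op graph and execute it in a session (TF 0.x graph/session API).
hello = tf.constant('Hello, TensorFlow!')
with tf.Session() as sess:
    print(sess.run(hello))  # b'Hello, TensorFlow!'
print(tf.__version__)       # 0.11.0 for this wheel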

Change into that directory and run the bundled test example (sudo is used here because the example downloads its MNIST data into a data/ subdirectory of the root-owned install path).

Check the output:

xiaokai@ubuntu:~$ cd /usr/local/lib/python3.5/dist-packages/tensorflow
xiaokai@ubuntu:/usr/local/lib/python3.5/dist-packages/tensorflow$ sudo python3 -m tensorflow.models.image.mnist.convolutional

Training output:

Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting data/train-images-idx3-ubyte.gz
Extracting data/train-labels-idx1-ubyte.gz
Extracting data/t10k-images-idx3-ubyte.gz
Extracting data/t10k-labels-idx1-ubyte.gz
Initialized!
Step 0 (epoch 0.00), 11.4 ms
Minibatch loss: 12.053, learning rate: 0.010000
Minibatch error: 90.6%
Validation error: 84.6%
Step 100 (epoch 0.12), 410.2 ms
Minibatch loss: 3.276, learning rate: 0.010000
Minibatch error: 6.2%
Validation error: 7.2%
Step 200 (epoch 0.23), 402.6 ms
Minibatch loss: 3.457, learning rate: 0.010000
Minibatch error: 14.1%
Validation error: 3.9%
...
Step 900 (epoch 1.05), 413.5 ms
Minibatch loss: 2.942, learning rate: 0.009500
Minibatch error: 3.1%
Validation error: 1.7%
...
Step 8400 (epoch 9.77), 423.6 ms
Minibatch loss: 1.596, learning rate: 0.006302
Minibatch error: 0.0%
Validation error: 0.7%
Step 8500 (epoch 9.89), 438.9 ms
Minibatch loss: 1.599, learning rate: 0.006302
Minibatch error: 0.0%
Validation error: 0.8%
Test error: 0.8%
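One detail worth decoding from this log: the learning rate steps down from 0.010000 to 0.009500 to 0.009025 at epoch boundaries. That is the staircased exponential decay configured in the script below (base rate 0.01, decay 0.95 per epoch, train size 55000 after the 5000-image validation split). A quick sketch reproducing the logged values:

# Reproduce the staircased learning-rate schedule seen in the log
# (base 0.01, decay 0.95 per epoch, train size 55000 = 60000 - 5000).
BATCH_SIZE = 64
TRAIN_SIZE = 55000
for step in [0, 900, 1800, 2600, 3500]:
    epoch = step * BATCH_SIZE // TRAIN_SIZE      # staircase: whole epochs only
    print(step, round(0.01 * 0.95 ** epoch, 6))  # 0.01, 0.0095, 0.009025, ...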

The convolutional.py source:

# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================

"""Simple, end-to-end, LeNet-5-like convolutional MNIST model example.

This should achieve a test error of 0.7%. Please keep this model as simple and
linear as possible, it is meant as a tutorial for simple convolutional models.

Run with --self_test on the command line to execute a short self-test.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import gzip
import os
import sys
import time

import numpy
from six.moves import urllib
from six.moves import xrange  # pylint: disable=redefined-builtin
import tensorflow as tf

SOURCE_URL = 'http://yann.lecun.com/exdb/mnist/'
WORK_DIRECTORY = 'data'
IMAGE_SIZE = 28
NUM_CHANNELS = 1
PIXEL_DEPTH = 255
NUM_LABELS = 10
VALIDATION_SIZE = 5000  # Size of the validation set.
SEED = 66478  # Set to None for random seed.
BATCH_SIZE = 64
NUM_EPOCHS = 10
EVAL_BATCH_SIZE = 64
EVAL_FREQUENCY = 100  # Number of steps between evaluations.


tf.app.flags.DEFINE_boolean("self_test", False, "True if running a self test.")
tf.app.flags.DEFINE_boolean('use_fp16', False,
                            "Use half floats instead of full floats if True.")
FLAGS = tf.app.flags.FLAGS


def data_type():
  """Return the type of the activations, weights, and placeholder variables."""
  if FLAGS.use_fp16:
    return tf.float16
  else:
    return tf.float32


def maybe_download(filename):
  """Download the data from Yann's website, unless it's already here."""
  if not tf.gfile.Exists(WORK_DIRECTORY):
    tf.gfile.MakeDirs(WORK_DIRECTORY)
  filepath = os.path.join(WORK_DIRECTORY, filename)
  if not tf.gfile.Exists(filepath):
    filepath, _ = urllib.request.urlretrieve(SOURCE_URL + filename, filepath)
    with tf.gfile.GFile(filepath) as f:
      size = f.size()
    print('Successfully downloaded', filename, size, 'bytes.')
  return filepath


def extract_data(filename, num_images):
  """Extract the images into a 4D tensor [image index, y, x, channels].

  Values are rescaled from [0, 255] down to [-0.5, 0.5].
  """
  print('Extracting', filename)
  with gzip.open(filename) as bytestream:
    bytestream.read(16)
    buf = bytestream.read(IMAGE_SIZE * IMAGE_SIZE * num_images * NUM_CHANNELS)
    data = numpy.frombuffer(buf, dtype=numpy.uint8).astype(numpy.float32)
    data = (data - (PIXEL_DEPTH / 2.0)) / PIXEL_DEPTH
    data = data.reshape(num_images, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS)
    return data


def extract_labels(filename, num_images):
  """Extract the labels into a vector of int64 label IDs."""
  print('Extracting', filename)
  with gzip.open(filename) as bytestream:
    bytestream.read(8)
    buf = bytestream.read(1 * num_images)
    labels = numpy.frombuffer(buf, dtype=numpy.uint8).astype(numpy.int64)
  return labels


def fake_data(num_images):
  """Generate a fake dataset that matches the dimensions of MNIST."""
  data = numpy.ndarray(
      shape=(num_images, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS),
      dtype=numpy.float32)
  labels = numpy.zeros(shape=(num_images,), dtype=numpy.int64)
  for image in xrange(num_images):
    label = image % 2
    data[image, :, :, 0] = label - 0.5
    labels[image] = label
  return data, labels


def error_rate(predictions, labels):
  """Return the error rate based on dense predictions and sparse labels."""
  return 100.0 - (
      100.0 *
      numpy.sum(numpy.argmax(predictions, 1) == labels) /
      predictions.shape[0])


def main(argv=None):  # pylint: disable=unused-argument
  if FLAGS.self_test:
    print('Running self-test.')
    train_data, train_labels = fake_data(256)
    validation_data, validation_labels = fake_data(EVAL_BATCH_SIZE)
    test_data, test_labels = fake_data(EVAL_BATCH_SIZE)
    num_epochs = 1
  else:
    # Get the data.
    train_data_filename = maybe_download('train-images-idx3-ubyte.gz')
    train_labels_filename = maybe_download('train-labels-idx1-ubyte.gz')
    test_data_filename = maybe_download('t10k-images-idx3-ubyte.gz')
    test_labels_filename = maybe_download('t10k-labels-idx1-ubyte.gz')

    # Extract it into numpy arrays.
    train_data = extract_data(train_data_filename, 60000)
    train_labels = extract_labels(train_labels_filename, 60000)
    test_data = extract_data(test_data_filename, 10000)
    test_labels = extract_labels(test_labels_filename, 10000)

    # Generate a validation set.
    validation_data = train_data[:VALIDATION_SIZE, ...]
    validation_labels = train_labels[:VALIDATION_SIZE]
    train_data = train_data[VALIDATION_SIZE:, ...]
    train_labels = train_labels[VALIDATION_SIZE:]
    num_epochs = NUM_EPOCHS
  train_size = train_labels.shape[0]

  # This is where training samples and labels are fed to the graph.
  # These placeholder nodes will be fed a batch of training data at each
  # training step using the {feed_dict} argument to the Run() call below.
  train_data_node = tf.placeholder(
      data_type(),
      shape=(BATCH_SIZE, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS))
  train_labels_node = tf.placeholder(tf.int64, shape=(BATCH_SIZE,))
  eval_data = tf.placeholder(
      data_type(),
      shape=(EVAL_BATCH_SIZE, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS))

  # The variables below hold all the trainable weights. They are passed an
  # initial value which will be assigned when we call:
  # {tf.initialize_all_variables().run()}
  conv1_weights = tf.Variable(
      tf.truncated_normal([5, 5, NUM_CHANNELS, 32],  # 5x5 filter, depth 32.
                          stddev=0.1,
                          seed=SEED, dtype=data_type()))
  conv1_biases = tf.Variable(tf.zeros([32], dtype=data_type()))
  conv2_weights = tf.Variable(tf.truncated_normal(
      [5, 5, 32, 64], stddev=0.1,
      seed=SEED, dtype=data_type()))
  conv2_biases = tf.Variable(tf.constant(0.1, shape=[64], dtype=data_type()))
  fc1_weights = tf.Variable(  # fully connected, depth 512.
      tf.truncated_normal([IMAGE_SIZE // 4 * IMAGE_SIZE // 4 * 64, 512],
                          stddev=0.1,
                          seed=SEED,
                          dtype=data_type()))
  fc1_biases = tf.Variable(tf.constant(0.1, shape=[512], dtype=data_type()))
  fc2_weights = tf.Variable(tf.truncated_normal([512, NUM_LABELS],
                                                stddev=0.1,
                                                seed=SEED,
                                                dtype=data_type()))
  fc2_biases = tf.Variable(tf.constant(
      0.1, shape=[NUM_LABELS], dtype=data_type()))

  # We will replicate the model structure for the training subgraph, as well
  # as the evaluation subgraphs, while sharing the trainable parameters.
  def model(data, train=False):
    """The Model definition."""
    # 2D convolution, with 'SAME' padding (i.e. the output feature map has
    # the same size as the input). Note that {strides} is a 4D array whose
    # shape matches the data layout: [image index, y, x, depth].
    conv = tf.nn.conv2d(data,
                        conv1_weights,
                        strides=[1, 1, 1, 1],
                        padding='SAME')
    # Bias and rectified linear non-linearity.
    relu = tf.nn.relu(tf.nn.bias_add(conv, conv1_biases))
    # Max pooling. The kernel size spec {ksize} also follows the layout of
    # the data. Here we have a pooling window of 2, and a stride of 2.
    pool = tf.nn.max_pool(relu,
                          ksize=[1, 2, 2, 1],
                          strides=[1, 2, 2, 1],
                          padding='SAME')
    conv = tf.nn.conv2d(pool,
                        conv2_weights,
                        strides=[1, 1, 1, 1],
                        padding='SAME')
    relu = tf.nn.relu(tf.nn.bias_add(conv, conv2_biases))
    pool = tf.nn.max_pool(relu,
                          ksize=[1, 2, 2, 1],
                          strides=[1, 2, 2, 1],
                          padding='SAME')
    # Reshape the feature map cuboid into a 2D matrix to feed it to the
    # fully connected layers.
    pool_shape = pool.get_shape().as_list()
    reshape = tf.reshape(
        pool,
        [pool_shape[0], pool_shape[1] * pool_shape[2] * pool_shape[3]])
    # Fully connected layer. Note that the '+' operation automatically
    # broadcasts the biases.
    hidden = tf.nn.relu(tf.matmul(reshape, fc1_weights) + fc1_biases)
    # Add a 50% dropout during training only. Dropout also scales
    # activations such that no rescaling is needed at evaluation time.
    if train:
      hidden = tf.nn.dropout(hidden, 0.5, seed=SEED)
    return tf.matmul(hidden, fc2_weights) + fc2_biases

  # Training computation: logits + cross-entropy loss.
  logits = model(train_data_node, True)
  loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
      logits, train_labels_node))

  # L2 regularization for the fully connected parameters.
  regularizers = (tf.nn.l2_loss(fc1_weights) + tf.nn.l2_loss(fc1_biases) +
                  tf.nn.l2_loss(fc2_weights) + tf.nn.l2_loss(fc2_biases))
  # Add the regularization term to the loss.
  loss += 5e-4 * regularizers

  # Optimizer: set up a variable that's incremented once per batch and
  # controls the learning rate decay.
  batch = tf.Variable(0, dtype=data_type())
  # Decay once per epoch, using an exponential schedule starting at 0.01.
  learning_rate = tf.train.exponential_decay(
      0.01,                # Base learning rate.
      batch * BATCH_SIZE,  # Current index into the dataset.
      train_size,          # Decay step.
      0.95,                # Decay rate.
      staircase=True)
  # Use simple momentum for the optimization.
  optimizer = tf.train.MomentumOptimizer(learning_rate,
                                         0.9).minimize(loss,
                                                       global_step=batch)

  # Predictions for the current training minibatch.
  train_prediction = tf.nn.softmax(logits)

  # Predictions for the test and validation, which we'll compute less often.
  eval_prediction = tf.nn.softmax(model(eval_data))

  # Small utility function to evaluate a dataset by feeding batches of data to
  # {eval_data} and pulling the results from {eval_predictions}.
  # Saves memory and enables this to run on smaller GPUs.
  def eval_in_batches(data, sess):
    """Get all predictions for a dataset by running it in small batches."""
    size = data.shape[0]
    if size < EVAL_BATCH_SIZE:
      raise ValueError("batch size for evals larger than dataset: %d" % size)
    predictions = numpy.ndarray(shape=(size, NUM_LABELS), dtype=numpy.float32)
    for begin in xrange(0, size, EVAL_BATCH_SIZE):
      end = begin + EVAL_BATCH_SIZE
      if end <= size:
        predictions[begin:end, :] = sess.run(
            eval_prediction,
            feed_dict={eval_data: data[begin:end, ...]})
      else:
        batch_predictions = sess.run(
            eval_prediction,
            feed_dict={eval_data: data[-EVAL_BATCH_SIZE:, ...]})
        predictions[begin:, :] = batch_predictions[begin - size:, :]
    return predictions

  # Create a local session to run the training.
  start_time = time.time()
  with tf.Session() as sess:
    # Run all the initializers to prepare the trainable parameters.
    tf.initialize_all_variables().run()
    print('Initialized!')
    # Loop through training steps.
    for step in xrange(int(num_epochs * train_size) // BATCH_SIZE):
      # Compute the offset of the current minibatch in the data.
      # Note that we could use better randomization across epochs.
      offset = (step * BATCH_SIZE) % (train_size - BATCH_SIZE)
      batch_data = train_data[offset:(offset + BATCH_SIZE), ...]
      batch_labels = train_labels[offset:(offset + BATCH_SIZE)]
      # This dictionary maps the batch data (as a numpy array) to the
      # node in the graph it should be fed to.
      feed_dict = {train_data_node: batch_data,
                   train_labels_node: batch_labels}
      # Run the graph and fetch some of the nodes.
      _, l, lr, predictions = sess.run(
          [optimizer, loss, learning_rate, train_prediction],
          feed_dict=feed_dict)
      if step % EVAL_FREQUENCY == 0:
        elapsed_time = time.time() - start_time
        start_time = time.time()
        print('Step %d (epoch %.2f), %.1f ms' %
              (step, float(step) * BATCH_SIZE / train_size,
               1000 * elapsed_time / EVAL_FREQUENCY))
        print('Minibatch loss: %.3f, learning rate: %.6f' % (l, lr))
        print('Minibatch error: %.1f%%' % error_rate(predictions, batch_labels))
        print('Validation error: %.1f%%' % error_rate(
            eval_in_batches(validation_data, sess), validation_labels))
        sys.stdout.flush()
    # Finally print the result!
    test_error = error_rate(eval_in_batches(test_data, sess), test_labels)
    print('Test error: %.1f%%' % test_error)
    if FLAGS.self_test:
      print('test_error', test_error)
      assert test_error == 0.0, 'expected 0.0 test_error, got %.2f' % (
          test_error,)


if __name__ == '__main__':
  tf.app.run()
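One design detail worth spelling out from the listing: fc1_weights takes IMAGE_SIZE // 4 * IMAGE_SIZE // 4 * 64 inputs because each of the two 2x2, stride-2 max-pool layers halves the spatial resolution, so the 28x28 input becomes 7x7 feature maps of depth 64. A small arithmetic check:

# Spatial size after two 2x2/stride-2 max-pools with SAME padding:
IMAGE_SIZE = 28
after_pool1 = IMAGE_SIZE // 2          # 14
after_pool2 = after_pool1 // 2         # 7
print(after_pool2 * after_pool2 * 64)  # 3136, the first dim of fc1_weights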