tensorflow67 《深度学习原理与TensorFlow实战》 04 CNNs See the World: Deep Residual Networks


00 Environment

# 《深度学习原理与TensorFlow实战》 04 CNNs See the World: Deep Residual Networks
# Book source code: https://github.com/DeepVisionTeam/TensorFlowBook.git
# Video lecture: http://edu.csdn.net/course/detail/5222
# Win10, TensorFlow 1.2.0, Python 3.6.1
# CUDA v8.0, cudnn-8.0-windows10-x64-v5.1
# https://github.com/tflearn/tflearn/blob/master/examples/images/residual_network_mnist.py
# https://github.com/tflearn/tflearn/blob/master/examples/images/residual_network_cifar10.py
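Before running the examples, it can help to confirm that the interpreter actually sees the versions listed above and a CUDA device. The snippet below is only an illustrative sanity check (not part of the book's code):

# Quick environment sanity check (illustrative; not from the book).
import sys
import tensorflow as tf
import tflearn
from tensorflow.python.client import device_lib

print("Python    :", sys.version.split()[0])   # expect 3.6.1
print("TensorFlow:", tf.__version__)           # expect 1.2.0
print("TFLearn   :", tflearn.__version__)
# A "/gpu:0" entry here means the CUDA 8.0 / cuDNN 5.1 setup is visible to TF.
print([d.name for d in device_lib.list_local_devices()])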

01 residual_network_mnist.py

# -*- coding: utf-8 -*-
""" Deep Residual Network.

Applying a Deep Residual Network to MNIST Dataset classification task.

References:
    - K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image
      Recognition, 2015.
    - Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. "Gradient-based
      learning applied to document recognition." Proceedings of the IEEE,
      86(11):2278-2324, November 1998.

Links:
    - [Deep Residual Network](http://arxiv.org/pdf/1512.03385.pdf)
    - [MNIST Dataset](http://yann.lecun.com/exdb/mnist/)
"""

from __future__ import division, print_function, absolute_import

import tflearn
import tflearn.data_utils as du

# Data loading and preprocessing
import tflearn.datasets.mnist as mnist
X, Y, testX, testY = mnist.load_data(one_hot=True)
X = X.reshape([-1, 28, 28, 1])
testX = testX.reshape([-1, 28, 28, 1])
X, mean = du.featurewise_zero_center(X)
testX = du.featurewise_zero_center(testX, mean)

# Building Residual Network
net = tflearn.input_data(shape=[None, 28, 28, 1])
net = tflearn.conv_2d(net, 64, 3, activation='relu', bias=False)
# Residual blocks
net = tflearn.residual_bottleneck(net, 3, 16, 64)
net = tflearn.residual_bottleneck(net, 1, 32, 128, downsample=True)
net = tflearn.residual_bottleneck(net, 2, 32, 128)
net = tflearn.residual_bottleneck(net, 1, 64, 256, downsample=True)
net = tflearn.residual_bottleneck(net, 2, 64, 256)
net = tflearn.batch_normalization(net)
net = tflearn.activation(net, 'relu')
net = tflearn.global_avg_pool(net)
# Regression
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='momentum',
                         loss='categorical_crossentropy',
                         learning_rate=0.1)
# Training
model = tflearn.DNN(net, checkpoint_path='model_resnet_mnist',
                    max_checkpoints=10, tensorboard_verbose=0)
model.fit(X, Y, n_epoch=100, validation_set=(testX, testY),
          show_metric=True, batch_size=256, run_id='resnet_mnist')

'''
Extracting mnist/train-images-idx3-ubyte.gz
Extracting mnist/train-labels-idx1-ubyte.gz
Extracting mnist/t10k-images-idx3-ubyte.gz
Extracting mnist/t10k-labels-idx1-ubyte.gz
---------------------------------
Run id: resnet_mnist
Log directory: /tmp/tflearn_logs/
---------------------------------
Training samples: 55000
Validation samples: 10000
--
Training Step: 1  | time: 6.395s
| Momentum | epoch: 001 | loss: 0.00000 - acc: 0.0000 -- iter: 00256/55000
Training Step: 2  | total loss: 2.07091 | time: 11.896s
| Momentum | epoch: 001 | loss: 2.07091 - acc: 0.0844 -- iter: 00512/55000
Training Step: 3  | total loss: 2.25693 | time: 17.365s
...
Training Step: 242  | total loss: 0.07294 | time: 156.351s
| Momentum | epoch: 002 | loss: 0.07294 - acc: 0.9785 -- iter: 06912/55000
Training Step: 243  | total loss: 0.07384 | time: 162.524s
| Momentum | epoch: 002 | loss: 0.07384 - acc: 0.9783 -- iter: 07168/55000
Training Step: 244  | total loss: 0.07520 | time: 168.575s
| Momentum | epoch: 002 | loss: 0.07520 - acc: 0.9789 -- iter: 07424/55000
...
'''
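The network above is built from tflearn.residual_bottleneck units: a 1x1 reduce / 3x3 / 1x1 expand convolution stack whose output is added back onto the block input (y = F(x) + x), which is what keeps gradients flowing through deep stacks. As a rough sketch only (not TFLearn's actual implementation; it ignores the downsample and channel-padding cases the library handles), one pre-activation bottleneck unit built from the same TFLearn layers could look like this. The helper name bottleneck_sketch is hypothetical:

# Hypothetical sketch of a single pre-activation bottleneck unit.
# Assumes `incoming` already has `out_channels` channels, so the
# identity shortcut can be added without any projection.
import tflearn

def bottleneck_sketch(incoming, bottleneck_size, out_channels):
    shortcut = incoming
    x = tflearn.batch_normalization(incoming)
    x = tflearn.activation(x, 'relu')
    x = tflearn.conv_2d(x, bottleneck_size, 1, bias=False)   # 1x1: reduce channels
    x = tflearn.batch_normalization(x)
    x = tflearn.activation(x, 'relu')
    x = tflearn.conv_2d(x, bottleneck_size, 3, bias=False)   # 3x3: spatial filtering
    x = tflearn.batch_normalization(x)
    x = tflearn.activation(x, 'relu')
    x = tflearn.conv_2d(x, out_channels, 1, bias=False)      # 1x1: restore channels
    return x + shortcut                                       # identity shortcut

In the script, a call such as tflearn.residual_bottleneck(net, 3, 16, 64) repeats this kind of unit 3 times with a bottleneck width of 16 and 64 output channels.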

02 residual_network_cifar10.py

# -*- coding: utf-8 -*-
""" Deep Residual Network.

Applying a Deep Residual Network to CIFAR-10 Dataset classification task.

References:
    - K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image
      Recognition, 2015.
    - Learning Multiple Layers of Features from Tiny Images, A. Krizhevsky, 2009.

Links:
    - [Deep Residual Network](http://arxiv.org/pdf/1512.03385.pdf)
    - [CIFAR-10 Dataset](https://www.cs.toronto.edu/~kriz/cifar.html)
"""

from __future__ import division, print_function, absolute_import

import tflearn

# Residual blocks
# 32 layers: n=5, 56 layers: n=9, 110 layers: n=18
n = 5

# Data loading
from tflearn.datasets import cifar10
(X, Y), (testX, testY) = cifar10.load_data()
Y = tflearn.data_utils.to_categorical(Y, 10)
testY = tflearn.data_utils.to_categorical(testY, 10)

# Real-time data preprocessing
img_prep = tflearn.ImagePreprocessing()
img_prep.add_featurewise_zero_center(per_channel=True)

# Real-time data augmentation
img_aug = tflearn.ImageAugmentation()
img_aug.add_random_flip_leftright()
img_aug.add_random_crop([32, 32], padding=4)

# Building Residual Network
net = tflearn.input_data(shape=[None, 32, 32, 3],
                         data_preprocessing=img_prep,
                         data_augmentation=img_aug)
net = tflearn.conv_2d(net, 16, 3, regularizer='L2', weight_decay=0.0001)
net = tflearn.residual_block(net, n, 16)
net = tflearn.residual_block(net, 1, 32, downsample=True)
net = tflearn.residual_block(net, n-1, 32)
net = tflearn.residual_block(net, 1, 64, downsample=True)
net = tflearn.residual_block(net, n-1, 64)
net = tflearn.batch_normalization(net)
net = tflearn.activation(net, 'relu')
net = tflearn.global_avg_pool(net)
# Regression
net = tflearn.fully_connected(net, 10, activation='softmax')
mom = tflearn.Momentum(0.1, lr_decay=0.1, decay_step=32000, staircase=True)
net = tflearn.regression(net, optimizer=mom,
                         loss='categorical_crossentropy')
# Training
model = tflearn.DNN(net, checkpoint_path='model_resnet_cifar10',
                    max_checkpoints=10, tensorboard_verbose=0,
                    clip_gradients=0.)
model.fit(X, Y, n_epoch=200, validation_set=(testX, testY),
          snapshot_epoch=False, snapshot_step=500,
          show_metric=True, batch_size=128, shuffle=True,
          run_id='resnet_cifar10')

'''
# Note: after the dataset archive extracts itself, the directory layout may need a manual fix:
# cifar-10-batches-py ==> cifar-10-batches-py/cifar-10-batches-py.
---------------------------------
Run id: resnet_cifar10
Log directory: /tmp/tflearn_logs/
---------------------------------
Preprocessing... Calculating mean over all dataset (this may take long)...
Mean: [ 0.49139968  0.48215841  0.44653091] (To avoid repetitive computation, add it to argument 'mean' of `add_featurewise_zero_center`)
---------------------------------
Training samples: 50000
Validation samples: 10000
--
Training Step: 1  | time: 3.993s
| Momentum | epoch: 001 | loss: 0.00000 - acc: 0.0000 -- iter: 00128/50000
Training Step: 2  | total loss: 2.08295 | time: 7.453s
| Momentum | epoch: 001 | loss: 2.08295 - acc: 0.0562 -- iter: 00256/50000
Training Step: 3  | total loss: 2.26373 | time: 10.725s
...
Training Step: 121  | total loss: 1.72538 | time: 425.664s
| Momentum | epoch: 001 | loss: 1.72538 - acc: 0.3421 -- iter: 15488/50000
Training Step: 122  | total loss: 1.72287 | time: 429.461s
| Momentum | epoch: 001 | loss: 1.72287 - acc: 0.3415 -- iter: 15616/50000
Training Step: 123  | total loss: 1.71641 | time: 433.079s
| Momentum | epoch: 001 | loss: 1.71641 - acc: 0.3448 -- iter: 15744/50000
...
'''
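The depth comment in the script follows the 6n + 2 rule for this CIFAR-10 architecture: each residual_block unit contains two 3x3 convolutions, the three stages hold n units each (6n conv layers), and the initial conv plus the final fully connected layer add 2, so n = 5 gives the 32-layer variant. Once model.fit finishes, the tflearn.DNN wrapper can be saved and reused; the sketch below continues the script above, and the .tfl file name is just an example:

# Persist and reuse the trained model (sketch continuing the script above;
# the file name 'resnet_cifar10.tfl' is arbitrary).
model.save('resnet_cifar10.tfl')

# Later (or in another process): rebuild the same `net` graph first, then:
model = tflearn.DNN(net)
model.load('resnet_cifar10.tfl')

# Held-out accuracy and per-class probabilities for a few test images.
print(model.evaluate(testX, testY, batch_size=128))
probs = model.predict(testX[:5])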