TensorFlow Implementations of Classic Deep Learning Networks (4): Implementing ResNet in TensorFlow

ResNet (Residual Neural Network) was proposed by Kaiming He's team at Microsoft Research. By stacking residual units it successfully trained networks up to 152 layers deep and dominated ILSVRC 2015, taking first place with a top-5 error rate of 3.57%. The residual structure dramatically speeds up the training of very deep networks and also yields a large improvement in accuracy. The paper, "Deep Residual Learning for Image Recognition", went on to win the CVPR 2016 Best Paper award, an honor it fully deserved. This article introduces the basic ideas behind ResNet and shows how to implement it in TensorFlow.

ResNet grew out of the following observation: network depth has a large effect on classification and recognition accuracy, so the natural instinct is to make the network as deep as possible. In practice, however, a plain stacked network (a "plain network") degrades once it becomes very deep: accuracy first rises, then saturates, and increasing the depth further actually makes accuracy drop.


The ResNet residual network:
• Core component: the skip/shortcut connection
• Plain net: learns to fit an arbitrary target mapping H(x)
• Residual net:
        • fits the mapping F(x), with H(x) = F(x) + x
        • F(x) is the residual mapping, defined relative to the identity
        • when the optimal mapping H(x) is close to the identity, small perturbations are easy to capture
This degradation is not an overfitting problem, because the error increases not only on the test set but also on the training set itself. To solve it, the authors proposed the residual structure:


The idea of adding an identity mapping that passes the output of an earlier layer directly to a later layer is the key insight behind ResNet. Suppose the input to a section of the network is x and the desired output is H(x). If we pass the input x straight through to the output as an initial estimate, the function that actually has to be learned becomes F(x) = H(x) - x, so that H(x) = F(x) + x. The figure above shows ResNet's residual learning unit; it effectively changes the learning target. The idea also echoes residual vector coding in image processing: through this reformulation, one problem is decomposed into residual problems across multiple scales, which turns out to ease optimization considerably.
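To make the idea concrete, here is a minimal, hypothetical sketch of a two-layer residual unit in the same TF 1.x / slim style as the code later in this article. The function name simple_residual_unit and its layer choices are purely illustrative and are not the bottleneck unit used by the full implementation below.

import tensorflow as tf
slim = tf.contrib.slim

def simple_residual_unit(x, depth, scope='residual_unit'):
    # Assumes x already has `depth` channels, so the identity shortcut
    # needs no projection.
    with tf.variable_scope(scope):
        residual = slim.conv2d(x, depth, [3, 3], scope='conv1')        # first 3x3 conv of F(x)
        residual = slim.conv2d(residual, depth, [3, 3],
                               activation_fn=None, scope='conv2')      # second 3x3 conv of F(x)
        return tf.nn.relu(x + residual)                                 # H(x) = F(x) + x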

• Other design choices (see the sketch after this list):
        • all convolutions use 3x3 kernels
        • stride-2 convolutions replace pooling for downsampling
        • Batch Normalization is used
        • removed entirely:
               • max pooling
               • fully connected layers
               • dropout
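As a rough illustration of the second and third points (assuming the same TF 1.x / slim environment as the main code, with made-up shapes), a stride-2 3x3 convolution with Batch Normalization attached can take the place of a pooling layer:

import tensorflow as tf
slim = tf.contrib.slim

images = tf.random_uniform((8, 56, 56, 64))             # toy NHWC feature map
net = slim.conv2d(images, 128, [3, 3], stride=2,        # stride-2 conv downsamples instead of pooling
                  normalizer_fn=slim.batch_norm,        # Batch Normalization after the convolution
                  activation_fn=tf.nn.relu,
                  scope='downsample_conv')
print(net.get_shape())                                  # (8, 28, 28, 128)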

The figure above compares VGGNet-19, a 34-layer plain convolutional network, and a 34-layer ResNet. The most visible difference is that ResNet has many bypass paths that feed an input directly into later layers, so those layers can learn the residual directly; this structure is called a shortcut or skip connection. Although shortcuts are inserted on top of the plain network, the two networks have the same number of parameters and the same computational cost, yet ResNet performs far better and converges much faster than the plain network.

• Going deeper: the bottleneck design of the residual mapping
        • original: 3x3x256x256 followed by 3x3x256x256
        • optimized: 1x1x256x64, then 3x3x64x64, then 1x1x64x256

Besides the two-layer residual learning unit there is also a three-layer (bottleneck) unit, which uses far fewer parameters for the same number of layers and therefore makes it practical to extend the model to much greater depths.
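A quick back-of-the-envelope count of the convolution weights (ignoring biases and BN parameters) shows why the bottleneck matters; the numbers below simply restate the shapes listed above:

two_layer = 2 * (3 * 3 * 256 * 256)        # two 3x3, 256->256 convolutions
bottleneck = (1 * 1 * 256 * 64             # 1x1 reduce 256 -> 64
              + 3 * 3 * 64 * 64            # 3x3 at 64 channels
              + 1 * 1 * 64 * 256)          # 1x1 restore 64 -> 256
print(two_layer, bottleneck)               # 1179648 vs 69632, roughly a 17x reduction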

Two-layer and three-layer ResNet residual modules

ResNet comes in 50-, 101-, and 152-layer variants (and deeper), all built from the same basic structure: stacks of the two-layer and three-layer residual units described above. These deeper networks show no degradation; their error rates drop substantially, the phenomenon of training error growing as layers are added disappears, and the computational cost remains comparatively low.


ResNet network configurations at different depths

Because training on the ImageNet dataset is very time-consuming, this article only benchmarks the speed of the complete ResNet V2 network, measuring the time taken by the forward and backward passes. Interested readers can download the ImageNet dataset themselves for full training and evaluation.

With the preparation done, we can build the network. ResNet V2 is relatively complex, so to keep the amount of code manageable while still being able to construct deep ResNet V2 models, this article uses the slim helper library. The code below was assembled from my own understanding of ResNet and existing resources (such as the book 《TensorFlow实战》), with comments added based on my own reading; corrections to the comments are welcome.
# -*- coding: utf-8 -*-
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'   # suppress TensorFlow C++ log noise

# ResNet V2
# Load the required modules and TensorFlow
import collections
import math
import time
from datetime import datetime

import tensorflow as tf

slim = tf.contrib.slim


# Define a Block: its scope name, the unit function to apply,
# and the argument list describing its residual units
class Block(collections.namedtuple('Block', ['scope', 'unit_fn', 'args'])):
    'A named tuple describing a ResNet block'


# Downsampling helper: reduce spatial resolution by the given factor
def subsample(inputs, factor, scope=None):
    if factor == 1:
        return inputs
    else:
        return slim.max_pool2d(inputs, [1, 1], stride=factor, scope=scope)


# conv2d_same creates a convolution whose output matches 'SAME' padding
# even when stride > 1 (pad explicitly, then convolve with 'VALID')
def conv2d_same(inputs, num_outputs, kernel_size, stride, scope=None):
    if stride == 1:
        return slim.conv2d(inputs, num_outputs, kernel_size, stride=1,
                           padding='SAME', scope=scope)
    else:
        # kernel_size_effective = kernel_size + (kernel_size - 1) * (rate - 1)
        pad_total = kernel_size - 1
        pad_beg = pad_total // 2
        pad_end = pad_total - pad_beg
        inputs = tf.pad(inputs,
                        [[0, 0], [pad_beg, pad_end], [pad_beg, pad_end], [0, 0]])
        return slim.conv2d(inputs, num_outputs, kernel_size, stride=stride,
                           padding='VALID', scope=scope)


# Stack the Blocks: the outer loop iterates over blocks,
# the inner loop over the residual units inside each block
@slim.add_arg_scope
def stack_blocks_dense(net, blocks,
                       outputs_collections=None):
    for block in blocks:
        with tf.variable_scope(block.scope, 'block', [net]) as sc:
            for i, unit in enumerate(block.args):
                with tf.variable_scope('unit_%d' % (i + 1), values=[net]):
                    unit_depth, unit_depth_bottleneck, unit_stride = unit
                    net = block.unit_fn(net,
                                        depth=unit_depth,
                                        depth_bottleneck=unit_depth_bottleneck,
                                        stride=unit_stride)
            net = slim.utils.collect_named_outputs(outputs_collections, sc.name, net)
    return net


# Create the shared ResNet arg_scope that defines default parameter values
def resnet_arg_scope(is_training=True,
                     weight_decay=0.0001,
                     batch_norm_decay=0.997,
                     batch_norm_epsilon=1e-5,
                     batch_norm_scale=True):
    batch_norm_params = {
        'is_training': is_training,
        'decay': batch_norm_decay,
        'epsilon': batch_norm_epsilon,
        'scale': batch_norm_scale,
        'updates_collections': tf.GraphKeys.UPDATE_OPS,
    }
    with slim.arg_scope(
            [slim.conv2d],
            weights_regularizer=slim.l2_regularizer(weight_decay),
            weights_initializer=slim.variance_scaling_initializer(),
            activation_fn=tf.nn.relu,
            normalizer_fn=slim.batch_norm,
            normalizer_params=batch_norm_params):
        with slim.arg_scope([slim.batch_norm], **batch_norm_params):
            with slim.arg_scope([slim.max_pool2d], padding='SAME') as arg_sc:
                return arg_sc


# The core bottleneck residual learning unit (pre-activation, ResNet V2 style)
@slim.add_arg_scope
def bottleneck(inputs, depth, depth_bottleneck, stride,
               outputs_collections=None, scope=None):
    with tf.variable_scope(scope, 'bottleneck_v2', [inputs]) as sc:
        depth_in = slim.utils.last_dimension(inputs.get_shape(), min_rank=4)
        preact = slim.batch_norm(inputs, activation_fn=tf.nn.relu, scope='preact')
        if depth == depth_in:
            shortcut = subsample(inputs, stride, 'shortcut')
        else:
            shortcut = slim.conv2d(preact, depth, [1, 1], stride=stride,
                                   normalizer_fn=None, activation_fn=None,
                                   scope='shortcut')
        residual = slim.conv2d(preact, depth_bottleneck, [1, 1], stride=1,
                               scope='conv1')
        residual = conv2d_same(residual, depth_bottleneck, 3, stride,
                               scope='conv2')
        residual = slim.conv2d(residual, depth, [1, 1], stride=1,
                               normalizer_fn=None, activation_fn=None,
                               scope='conv3')
        output = shortcut + residual
        return slim.utils.collect_named_outputs(outputs_collections,
                                                sc.name,
                                                output)


# The main function that generates a ResNet V2 network
def resnet_v2(inputs,
              blocks,
              num_classes=None,
              global_pool=True,
              include_root_block=True,
              reuse=None,
              scope=None):
    with tf.variable_scope(scope, 'resnet_v2', [inputs], reuse=reuse) as sc:
        end_points_collection = sc.original_name_scope + '_end_points'
        with slim.arg_scope([slim.conv2d, bottleneck,
                             stack_blocks_dense],
                            outputs_collections=end_points_collection):
            net = inputs
            if include_root_block:
                with slim.arg_scope([slim.conv2d],
                                    activation_fn=None, normalizer_fn=None):
                    net = conv2d_same(net, 64, 7, stride=2, scope='conv1')
                net = slim.max_pool2d(net, [3, 3], stride=2, scope='pool1')
            net = stack_blocks_dense(net, blocks)
            net = slim.batch_norm(net, activation_fn=tf.nn.relu, scope='postnorm')
            if global_pool:
                # Global average pooling.
                net = tf.reduce_mean(net, [1, 2], name='pool5', keep_dims=True)
            if num_classes is not None:
                net = slim.conv2d(net, num_classes, [1, 1], activation_fn=None,
                                  normalizer_fn=None, scope='logits')
            # Convert end_points_collection into a dictionary of end_points.
            end_points = slim.utils.convert_collection_to_dict(end_points_collection)
            if num_classes is not None:
                end_points['predictions'] = slim.softmax(net, scope='predictions')
            return net, end_points


# 50-layer ResNet V2
def resnet_v2_50(inputs,
                 num_classes=None,
                 global_pool=True,
                 reuse=None,
                 scope='resnet_v2_50'):
    blocks = [
        Block('block1', bottleneck, [(256, 64, 1)] * 2 + [(256, 64, 2)]),
        Block('block2', bottleneck, [(512, 128, 1)] * 3 + [(512, 128, 2)]),
        Block('block3', bottleneck, [(1024, 256, 1)] * 5 + [(1024, 256, 2)]),
        Block('block4', bottleneck, [(2048, 512, 1)] * 3)]
    return resnet_v2(inputs, blocks, num_classes, global_pool,
                     include_root_block=True, reuse=reuse, scope=scope)


# 101-layer ResNet V2
def resnet_v2_101(inputs,
                  num_classes=None,
                  global_pool=True,
                  reuse=None,
                  scope='resnet_v2_101'):
    blocks = [
        Block('block1', bottleneck, [(256, 64, 1)] * 2 + [(256, 64, 2)]),
        Block('block2', bottleneck, [(512, 128, 1)] * 3 + [(512, 128, 2)]),
        Block('block3', bottleneck, [(1024, 256, 1)] * 22 + [(1024, 256, 2)]),
        Block('block4', bottleneck, [(2048, 512, 1)] * 3)]
    return resnet_v2(inputs, blocks, num_classes, global_pool,
                     include_root_block=True, reuse=reuse, scope=scope)


# 152-layer ResNet V2
def resnet_v2_152(inputs,
                  num_classes=None,
                  global_pool=True,
                  reuse=None,
                  scope='resnet_v2_152'):
    blocks = [
        Block('block1', bottleneck, [(256, 64, 1)] * 2 + [(256, 64, 2)]),
        Block('block2', bottleneck, [(512, 128, 1)] * 7 + [(512, 128, 2)]),
        Block('block3', bottleneck, [(1024, 256, 1)] * 35 + [(1024, 256, 2)]),
        Block('block4', bottleneck, [(2048, 512, 1)] * 3)]
    return resnet_v2(inputs, blocks, num_classes, global_pool,
                     include_root_block=True, reuse=reuse, scope=scope)


# 200-layer ResNet V2
def resnet_v2_200(inputs,
                  num_classes=None,
                  global_pool=True,
                  reuse=None,
                  scope='resnet_v2_200'):
    blocks = [
        Block('block1', bottleneck, [(256, 64, 1)] * 2 + [(256, 64, 2)]),
        Block('block2', bottleneck, [(512, 128, 1)] * 23 + [(512, 128, 2)]),
        Block('block3', bottleneck, [(1024, 256, 1)] * 35 + [(1024, 256, 2)]),
        Block('block4', bottleneck, [(2048, 512, 1)] * 3)]
    return resnet_v2(inputs, blocks, num_classes, global_pool,
                     include_root_block=True, reuse=reuse, scope=scope)


# Benchmarking function: measures the average time per batch for running `target`
def time_tensorflow_run(session, target, info_string):
    num_steps_burn_in = 10
    total_duration = 0.0
    total_duration_squared = 0.0
    for i in range(num_batches + num_steps_burn_in):
        start_time = time.time()
        _ = session.run(target)
        duration = time.time() - start_time
        if i >= num_steps_burn_in:
            if not i % 10:
                print('%s: step %d, duration = %.3f' %
                      (datetime.now(), i - num_steps_burn_in, duration))
            total_duration += duration
            total_duration_squared += duration * duration
    mn = total_duration / num_batches
    vr = total_duration_squared / num_batches - mn * mn
    sd = math.sqrt(vr)
    print('%s: %s across %d steps, %.3f +/- %.3f sec / batch' %
          (datetime.now(), info_string, num_batches, mn, sd))


batch_size = 32
height, width = 224, 224
inputs = tf.random_uniform((batch_size, height, width, 3))
with slim.arg_scope(resnet_arg_scope(is_training=False)):
    net, end_points = resnet_v2_152(inputs, 1000)  # benchmark the 152-layer model
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
num_batches = 100
time_tensorflow_run(sess, net, "Forward")
Running the program, we will see output like the following (forward-pass performance test):

2017-10-15 10:59:00.831156: step 0, duration = 8.954
2017-10-15 11:00:30.933252: step 10, duration = 9.048
2017-10-15 11:02:01.370461: step 20, duration = 8.999
2017-10-15 11:03:31.873238: step 30, duration = 8.953
2017-10-15 11:05:03.045593: step 40, duration = 9.360
2017-10-15 11:06:33.642941: step 50, duration = 9.037
2017-10-15 11:08:03.993324: step 60, duration = 8.998
2017-10-15 11:09:34.304207: step 70, duration = 9.170
2017-10-15 11:11:05.943414: step 80, duration = 9.068
2017-10-15 11:12:38.635693: step 90, duration = 9.285
2017-10-15 11:14:03.069851: Forward across 100 steps, 9.112 +/- 0.153 sec / batch
The lines above show the forward computation time of ResNet V2 during the run; the backward timing is left for the reader to add, and one possible sketch follows.
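A hedged way to add it (not part of the original script; the toy L2 loss and the reuse of is_training=False keep this a pure speed test rather than a real training setup) is to build a loss on the network output and time the gradient computation with the same helper:

loss = tf.nn.l2_loss(net)                              # toy loss on the logits, for timing only
grads = tf.gradients(loss, tf.trainable_variables())   # backward pass: gradients w.r.t. all weights
time_tensorflow_run(sess, grads, "Forward-backward")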

This completes the overview of ResNet and its TensorFlow implementation. The code includes designs for several ResNet depths, and readers can modify it to explore how the network performs at different depths (see the sketch below). ResNet's design is remarkably elegant and was a genuine milestone: it made training of extremely deep networks practical, contributed many CNN design ideas and tricks that are still widely borrowed, and achieved excellent results.
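For example, benchmarking the 50-layer variant instead of the 152-layer one only requires changing the model construction line; the rest of the script above stays the same (a sketch):

with slim.arg_scope(resnet_arg_scope(is_training=False)):
    net, end_points = resnet_v2_50(inputs, 1000)       # 50-layer ResNet V2 instead of 152 layers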

In future posts I will continue to explore the endless fun of TensorFlow and deep learning networks, and we will dig into the mysteries of deep learning together. If you are interested, my Weibo shares the latest developments in artificial intelligence, machine learning, deep learning, and computer vision.
