Image Semantic Segmentation: Code Implementation (1)


Building on the FCN algorithm introduced in "Image Semantic Segmentation (1) - FCN", this post uses the official code to train and test on the SIFT-Flow dataset.

It also describes how to prepare your own training data.


Data Preparation

Reference: "Training an FCN Network: the SIFT-Flow Dataset as an Example"

1) First, clone the official repository:

git clone https://github.com/shelhamer/fcn.berkeleyvision.org.git

The project is based on Caffe, so Caffe needs to be installed beforehand.

2) Download the dataset and models
- Download the SIFT-Flow dataset here and extract it under fcn/data/sift-flow/
- Download the VGG-16 pre-trained model here and move it to fcn/ilsvrc-nets/
- Following the article "Difficulties Encountered in FCN Model Training", download VGG_ILSVRC_16_layers_deploy.prototxt here,
 or simply copy the following content:

name: "VGG_ILSVRC_16_layers"
input: "data"
input_dim: 10
input_dim: 3
input_dim: 224
input_dim: 224
layers { bottom: "data" top: "conv1_1" name: "conv1_1" type: CONVOLUTION
  convolution_param { num_output: 64 pad: 1 kernel_size: 3 } }
layers { bottom: "conv1_1" top: "conv1_1" name: "relu1_1" type: RELU }
layers { bottom: "conv1_1" top: "conv1_2" name: "conv1_2" type: CONVOLUTION
  convolution_param { num_output: 64 pad: 1 kernel_size: 3 } }
layers { bottom: "conv1_2" top: "conv1_2" name: "relu1_2" type: RELU }
layers { bottom: "conv1_2" top: "pool1" name: "pool1" type: POOLING
  pooling_param { pool: MAX kernel_size: 2 stride: 2 } }
layers { bottom: "pool1" top: "conv2_1" name: "conv2_1" type: CONVOLUTION
  convolution_param { num_output: 128 pad: 1 kernel_size: 3 } }
layers { bottom: "conv2_1" top: "conv2_1" name: "relu2_1" type: RELU }
layers { bottom: "conv2_1" top: "conv2_2" name: "conv2_2" type: CONVOLUTION
  convolution_param { num_output: 128 pad: 1 kernel_size: 3 } }
layers { bottom: "conv2_2" top: "conv2_2" name: "relu2_2" type: RELU }
layers { bottom: "conv2_2" top: "pool2" name: "pool2" type: POOLING
  pooling_param { pool: MAX kernel_size: 2 stride: 2 } }
layers { bottom: "pool2" top: "conv3_1" name: "conv3_1" type: CONVOLUTION
  convolution_param { num_output: 256 pad: 1 kernel_size: 3 } }
layers { bottom: "conv3_1" top: "conv3_1" name: "relu3_1" type: RELU }
layers { bottom: "conv3_1" top: "conv3_2" name: "conv3_2" type: CONVOLUTION
  convolution_param { num_output: 256 pad: 1 kernel_size: 3 } }
layers { bottom: "conv3_2" top: "conv3_2" name: "relu3_2" type: RELU }
layers { bottom: "conv3_2" top: "conv3_3" name: "conv3_3" type: CONVOLUTION
  convolution_param { num_output: 256 pad: 1 kernel_size: 3 } }
layers { bottom: "conv3_3" top: "conv3_3" name: "relu3_3" type: RELU }
layers { bottom: "conv3_3" top: "pool3" name: "pool3" type: POOLING
  pooling_param { pool: MAX kernel_size: 2 stride: 2 } }
layers { bottom: "pool3" top: "conv4_1" name: "conv4_1" type: CONVOLUTION
  convolution_param { num_output: 512 pad: 1 kernel_size: 3 } }
layers { bottom: "conv4_1" top: "conv4_1" name: "relu4_1" type: RELU }
layers { bottom: "conv4_1" top: "conv4_2" name: "conv4_2" type: CONVOLUTION
  convolution_param { num_output: 512 pad: 1 kernel_size: 3 } }
layers { bottom: "conv4_2" top: "conv4_2" name: "relu4_2" type: RELU }
layers { bottom: "conv4_2" top: "conv4_3" name: "conv4_3" type: CONVOLUTION
  convolution_param { num_output: 512 pad: 1 kernel_size: 3 } }
layers { bottom: "conv4_3" top: "conv4_3" name: "relu4_3" type: RELU }
layers { bottom: "conv4_3" top: "pool4" name: "pool4" type: POOLING
  pooling_param { pool: MAX kernel_size: 2 stride: 2 } }
layers { bottom: "pool4" top: "conv5_1" name: "conv5_1" type: CONVOLUTION
  convolution_param { num_output: 512 pad: 1 kernel_size: 3 } }
layers { bottom: "conv5_1" top: "conv5_1" name: "relu5_1" type: RELU }
layers { bottom: "conv5_1" top: "conv5_2" name: "conv5_2" type: CONVOLUTION
  convolution_param { num_output: 512 pad: 1 kernel_size: 3 } }
layers { bottom: "conv5_2" top: "conv5_2" name: "relu5_2" type: RELU }
layers { bottom: "conv5_2" top: "conv5_3" name: "conv5_3" type: CONVOLUTION
  convolution_param { num_output: 512 pad: 1 kernel_size: 3 } }
layers { bottom: "conv5_3" top: "conv5_3" name: "relu5_3" type: RELU }
layers { bottom: "conv5_3" top: "pool5" name: "pool5" type: POOLING
  pooling_param { pool: MAX kernel_size: 2 stride: 2 } }
layers { bottom: "pool5" top: "fc6" name: "fc6" type: INNER_PRODUCT
  inner_product_param { num_output: 4096 } }
layers { bottom: "fc6" top: "fc6" name: "relu6" type: RELU }
layers { bottom: "fc6" top: "fc6" name: "drop6" type: DROPOUT
  dropout_param { dropout_ratio: 0.5 } }
layers { bottom: "fc6" top: "fc7" name: "fc7" type: INNER_PRODUCT
  inner_product_param { num_output: 4096 } }
layers { bottom: "fc7" top: "fc7" name: "relu7" type: RELU }
layers { bottom: "fc7" top: "fc7" name: "drop7" type: DROPOUT
  dropout_param { dropout_ratio: 0.5 } }
layers { bottom: "fc7" top: "fc8" name: "fc8" type: INNER_PRODUCT
  inner_product_param { num_output: 1000 } }
layers { bottom: "fc8" top: "prob" name: "prob" type: SOFTMAX }
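As a quick sanity check that nothing was lost when copying the prototxt, you can count the layer types with a few lines of Python. This is only an illustrative sketch (the `count_layer_types` helper is hypothetical, not part of the official repo); for VGG-16 the full file should yield 13 CONVOLUTION, 5 POOLING, and 3 INNER_PRODUCT entries.

```python
import re

def count_layer_types(prototxt_text):
    """Count occurrences of each layer type in a legacy-format prototxt."""
    types = re.findall(r'type:\s*([A-Z_]+)', prototxt_text)
    counts = {}
    for t in types:
        counts[t] = counts.get(t, 0) + 1
    return counts

# A small excerpt, just to show the shape of the output.
snippet = '''
layers { bottom: "data" top: "conv1_1" name: "conv1_1" type: CONVOLUTION }
layers { bottom: "conv1_1" top: "conv1_1" name: "relu1_1" type: RELU }
layers { bottom: "conv1_1" top: "pool1" name: "pool1" type: POOLING }
'''
print(count_layer_types(snippet))  # {'CONVOLUTION': 1, 'RELU': 1, 'POOLING': 1}
```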

Modifying the Training Scripts

1) Generate test, trainval, and deploy files

a. Run fcn/siftflow-fcn32s/net.py to generate test.prototxt and trainval.prototxt
b. Copy test.prototxt to deploy.prototxt

Replace the first data layer with:

layer {
  name: "input"
  type: "Input"
  top: "data"
  input_param {
    # These dimensions are purely for sake of example;
    # see infer.py for how to reshape the net to the given input size.
    shape { dim: 1 dim: 3 dim: 256 dim: 256 }
  }
}

Delete the loss layers at the end of the network (2 in total).
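The two manual edits above (swapping in the Input layer and deleting the loss layers) can also be scripted. The sketch below is a hypothetical plain-text approach that assumes the data layer comes first and identifies loss layers by name; in practice, editing by hand or using Caffe's protobuf API is more robust.

```python
# Illustrative only: convert a test.prototxt string to a deploy-style one.
INPUT_LAYER = '''layer {
  name: "input"
  type: "Input"
  top: "data"
  input_param { shape { dim: 1 dim: 3 dim: 256 dim: 256 } }
}'''

def split_blocks(text):
    """Split a prototxt into top-level 'layer { ... }' blocks by brace depth."""
    blocks, depth, current = [], 0, []
    for line in text.splitlines():
        current.append(line)
        depth += line.count('{') - line.count('}')
        if depth == 0 and any('{' in l for l in current):
            blocks.append('\n'.join(current).strip())
            current = []
    return [b for b in blocks if b]

def make_deploy(test_prototxt):
    blocks = split_blocks(test_prototxt)
    out, replaced = [], False
    for b in blocks:
        if not replaced and 'name: "data"' in b:   # first data layer
            out.append(INPUT_LAYER)
            replaced = True
        elif 'loss' in b.lower():                  # drop the loss layers
            continue
        else:
            out.append(b)
    return '\n'.join(out)

# Toy network to exercise the helper.
toy = '''layer {
  name: "data"
  type: "Python"
  top: "data"
  top: "label"
}
layer {
  name: "conv1_1"
  type: "Convolution"
  bottom: "data"
  top: "conv1_1"
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "conv1_1"
  bottom: "label"
}'''
deploy = make_deploy(toy)
```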

2) Modify fcn/siftflow-fcn32s/solve.py:

import caffe
import surgery, score

import numpy as np
import os
import sys

try:
    import setproctitle
    setproctitle.setproctitle(os.path.basename(os.getcwd()))
except:
    pass

vgg_weights = '../ilsvrc-nets/vgg16-fcn.caffemodel'
vgg_proto = '../ilsvrc-nets/VGG_ILSVRC_16_layers_deploy.prototxt'

# init
caffe.set_device(0)
caffe.set_mode_gpu()

solver = caffe.SGDSolver('solver.prototxt')
#solver.net.copy_from(weights)
vgg_net = caffe.Net(vgg_proto, vgg_weights, caffe.TRAIN)
surgery.transplant(solver.net, vgg_net)
del vgg_net

# surgeries
interp_layers = [k for k in solver.net.params.keys() if 'up' in k]
surgery.interp(solver.net, interp_layers)

# scoring
test = np.loadtxt('../data/sift-flow/test.txt', dtype=str)

for _ in range(50):
    solver.step(2000)
    # N.B. metrics on the semantic labels are off b.c. of missing classes;
    # score manually from the histogram instead for proper evaluation
    score.seg_tests(solver, False, test, layer='score_sem', gt='sem')
    score.seg_tests(solver, False, test, layer='score_geo', gt='geo')
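The key change here is surgery.transplant, which copies VGG's parameters into the FCN layer by layer by name, and flat-copies when the shapes differ but the element counts agree (this is how the fully connected fc6/fc7 become convolutional). A minimal NumPy sketch of the idea, using toy shapes rather than the real network blobs:

```python
import numpy as np

def transplant(new_params, old_params):
    """Conceptual sketch of parameter transplanting (toy dict version)."""
    for name, old_w in old_params.items():
        if name not in new_params:
            continue  # layer absent in the new net: skip it
        new_w = new_params[name]
        if old_w.shape == new_w.shape:
            new_params[name] = old_w.copy()
        elif old_w.size == new_w.size:
            # flat copy: reinterpret fc weights as conv filters
            new_params[name] = old_w.reshape(new_w.shape).copy()
        # otherwise leave the new layer's own initialization untouched
    return new_params

# Toy stand-ins: a "fully connected" weight becomes a "convolutional" one
# of the same total size, mimicking VGG's fc6 -> FCN's conv fc6.
old = {'fc6': np.arange(8.).reshape(2, 4)}
new = {'fc6': np.zeros((2, 1, 2, 2)), 'extra': np.zeros(3)}
new = transplant(new, old)
print(new['fc6'].shape)  # (2, 1, 2, 2)
```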

3) Modify fcn/siftflow-fcn32s/solver.prototxt
Add snapshot settings:

snapshot: 4000
snapshot_prefix: "snapshot/train"

Training and Testing

1) Copy infer.py, score.py, siftflow_layers.py, and surgery.py from fcn/ into fcn/siftflow-fcn32s/

2) Run python solve.py to start training

3) Modify the model path and test image path in infer.py
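Internally, infer.py turns the network's (num_classes, H, W) score blob into a label image by taking the per-pixel argmax over the class axis. A sketch with random scores standing in for the real blob (the 33-class count matches SIFT-Flow's semantic labels; the spatial size is arbitrary):

```python
import numpy as np

# Stand-in for net.blobs['score_sem'].data[0]: one score map per class.
num_classes, H, W = 33, 256, 256
scores = np.random.randn(num_classes, H, W)

# The predicted label image: per-pixel argmax over the class axis.
labels = scores.argmax(axis=0)            # (H, W) array of class indices
assert labels.shape == (H, W)
assert labels.min() >= 0 and labels.max() < num_classes
```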

[Figure 1. Segmentation result after 72,000 iterations]

4) You can then use the fcn32s training result as the starting point to train fcn16s and fcn8s.
 Note that for fcn16s and fcn8s there is no need to reconstruct the network layers, so solve.py needs no modification:

import caffe
import surgery, score

import numpy as np
import os
import sys

try:
    import setproctitle
    setproctitle.setproctitle(os.path.basename(os.getcwd()))
except:
    pass

weights = '../siftflow-fcn32s/snapshot/train_iter_100000.caffemodel'

# init
caffe.set_device(0)
caffe.set_mode_gpu()

solver = caffe.SGDSolver('solver.prototxt')
solver.net.copy_from(weights)

# surgeries
interp_layers = [k for k in solver.net.params.keys() if 'up' in k]
surgery.interp(solver.net, interp_layers)

# scoring
test = np.loadtxt('../data/sift-flow/test.txt', dtype=str)

for _ in range(50):
    solver.step(2000)
    # N.B. metrics on the semantic labels are off b.c. of missing classes;
    # score manually from the histogram instead for proper evaluation
    score.seg_tests(solver, False, test, layer='score_sem', gt='sem')
    score.seg_tests(solver, False, test, layer='score_geo', gt='geo')

How to Prepare Your Own Training Data

Compared with detection (where targets are simply boxed with LabelImg), preparing segmentation data takes considerably more effort.

As described in this post, MIT provides LabelMe, an online polygon annotation tool; in engineering practice, however, Photoshop's "Quick Selection" tool is more commonly used for better precision.

1) Open the image to be labeled in Photoshop and convert it to grayscale via "Image -> Mode -> Grayscale"
2) Use the "Quick Selection" tool to select a target region, then "right-click -> Fill -> Color"; if the region's label is 9, set the RGB fill to (9, 9, 9)

[Figure 2. Selecting a region and filling it]

3) Once all categories are filled, save the label image via "File -> Save As"

Note: the method above targets the CamVid data format used in SegNet (Figure 3).

[Figure 3. CamVid data format]

As shown in Figure 3, train and test contain the RGB images, while trainannot and testannot contain the labeled (grayscale) images; one training sample (right side of Figure 3) thus consists of a pair of images.
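Because every channel of a region filled with (9, 9, 9) holds the same value, recovering the single-channel label map expected by the CamVid-style format is just a matter of taking one channel of the saved image. A toy NumPy sketch (array sizes are illustrative):

```python
import numpy as np

# Toy 4x4 "annotation" image: background is class 0, and a 2x2 region
# was filled with RGB (9, 9, 9), i.e. labeled as class 9.
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
rgb[1:3, 1:3] = (9, 9, 9)

# Sanity check: all three channels agree, so any one of them is the label map.
assert (rgb[..., 0] == rgb[..., 1]).all() and (rgb[..., 1] == rgb[..., 2]).all()
label = rgb[..., 0]                        # (H, W) single-channel label map
print(sorted(np.unique(label)))            # [0, 9]
```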

