py-faster-rcnn + CPU installation and training on your own dataset


This article covers installing the Python version of the faster-rcnn project.
For the MATLAB version, see: https://github.com/ShaoqingRen/faster_rcnn
Python version project homepage: https://github.com/rbgirshick/py-faster-rcnn

The walkthrough has two parts. Part 1 covers installing py-faster-rcnn and its caffe framework; Part 2 explains which files to modify in order to train on your own dataset.

Part 1: Installation

Clone the project code. Note that the clone must be recursive: the repo bundles its own copy of caffe, which differs slightly from the original caffe, so be careful:

git clone --recursive  https://github.com/rbgirshick/py-faster-rcnn.git

Install the Python package cython with pip:

sudo pip install cython

Build the Cython extensions

cd py-faster-rcnn/lib

Edit setup.py and comment out the GPU-related code: the CUDA = locate_cuda() call, the self.set_executable('compiler_so', CUDA['nvcc']) line, and the entire nms.gpu_nms Extension block shown below:

Extension('nms.gpu_nms',
        ['nms/nms_kernel.cu', 'nms/gpu_nms.pyx'],
        library_dirs=[CUDA['lib64']],
        libraries=['cudart'],
        language='c++',
        runtime_library_dirs=[CUDA['lib64']],
        # this syntax is specific to this build system
        # we're only going to use certain compiler args with nvcc and not with
        # gcc the implementation of this trick is in customize_compiler() below
        extra_compile_args={'gcc': ["-Wno-unused-function"],
                            'nvcc': ['-arch=sm_35',
                                     '--ptxas-options=-v',
                                     '-c',
                                     '--compiler-options',
                                     "'-fPIC'"]},
        include_dirs = [numpy_include, CUDA['include']]),

Once setup.py has been modified, run make.
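As a quick sanity check, the freshly built CPU-only Cython extensions should import cleanly from inside the lib directory; a minimal sketch:

# run from py-faster-rcnn/lib after `make`
from nms.cpu_nms import cpu_nms                # CPU non-maximum suppression
from utils.cython_bbox import bbox_overlaps    # bbox overlap helper used by the roidb code
print('Cython extensions built OK')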

Build caffe (the caffe-fast-rcnn bundled with the project)
For the detailed installation steps, see my other blog post: http://blog.csdn.net/zhang_shuai12/article/details/52289825
Note: after running make, don't forget to run make pycaffe.
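A quick way to confirm that the bundled pycaffe built correctly is to import it in CPU mode; a minimal sketch, run from the py-faster-rcnn root (adjust the path if caffe-fast-rcnn/python is already on your PYTHONPATH):

import sys
sys.path.insert(0, 'caffe-fast-rcnn/python')   # bundled pycaffe

import caffe
caffe.set_mode_cpu()                           # we built with CPU_ONLY
print('pycaffe loaded from', caffe.__file__)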

At this point, all of the installation is complete.

Part 2: Training on your own dataset

First, let's run the project's own demo.
Download the pascal voc 2007 dataset mentioned in the paper (pascal voc 2012 can be downloaded in the same way):

wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCdevkit_08-Jun-2007.tar

After the downloads finish, extract them:

tar xvf VOCtrainval_06-Nov-2007.tar
tar xvf VOCtest_06-Nov-2007.tar
tar xvf VOCdevkit_08-Jun-2007.tar

After extraction, the following directory structure is created:

$VOCdevkit/
$VOCdevkit/VOCcode/
$VOCdevkit/VOC2007

Create a symlink for the pascal voc 2007 dataset:

cd py-faster-rcnn/data
ln -s /path/to/VOCdevkit VOCdevkit2007

Now a VOCdevkit2007 folder appears under the data directory, with the same structure as VOCdevkit.
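A minimal sketch to confirm the symlinked dataset has the layout the VOC code expects (run from the py-faster-rcnn root):

import os

root = 'data/VOCdevkit2007/VOC2007'
for sub in ('Annotations', 'ImageSets/Main', 'JPEGImages'):
    path = os.path.join(root, sub)
    print(path, 'exists' if os.path.isdir(path) else 'MISSING')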

To train on the dataset, the paper provides several training schemes; the specific parameters can be found in the code and set accordingly:

cd py-faster-rcnn
./experiments/scripts/faster_rcnn_alt_opt.sh [NET] [--set ...]
or
./experiments/scripts/faster_rcnn_end2end.sh [NET] [--set ...]

Alternatively, we can run the following command to train:

./tools/train_net.py --cpu --solver path/to/solver.prototxt --weights path/to/pretrain_model --imdb voc_2007_trainval --iters 100000 --cfg experiments/cfgs/faster_rcnn_end2end.yml

For testing, the following command line can be used:

./tools/test_net.py --cpu --def path/to/test.prototxt --net path/to/your/final.model --imdb voc_2007_test --cfg experiments/cfgs/faster_rcnn_end2end.yml

The authors have already pretrained several models; we just need to download them:

cd py-faster-rcnn
./data/scripts/fetch_faster_rcnn_models.sh
./data/scripts/fetch_imagenet_models.sh
./data/scripts/fetch_selective_search_data.sh

After the download finishes, extract the models and run:

./tools/demo.py --cpu

and you will see the detection results on the demo images.

OK, now let's modify the files to train and test on our own dataset. During the replacement we keep the folder names unchanged and only swap out the data inside.

The dataset we downloaded and extracted is stored at:

data/VOCdevkit2007/VOC2007

VOC2007 contains three folders:

VOC2007/Annotations
VOC2007/ImageSets
VOC2007/JPEGImages

Annotations: stores one xml file per image; each xml contains the objects' bbox, name, and so on.
ImageSets: stores the dataset splits, including the train, val, and test files.
JPEGImages: stores all the images.

We simply replace the original files with our own training data, placing it in the corresponding three directories above.

Also, while annotating data I found a very convenient labeling tool, labelImg: after you draw a box on an image it automatically generates an xml file in the format required for training.
GitHub project page: https://github.com/tzutalin/labelImg
If you're interested, download and install it yourself.
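To sanity-check that your annotation files contain the fields the VOC loading code reads (object name and bounding box), here is a minimal sketch using Python's standard library; the file path below is just an example:

import xml.etree.ElementTree as ET

tree = ET.parse('data/VOCdevkit2007/VOC2007/Annotations/000001.xml')  # example path
for obj in tree.findall('object'):
    name = obj.find('name').text            # must match an entry in self._classes
    box = obj.find('bndbox')
    xmin, ymin, xmax, ymax = (int(float(box.find(tag).text))
                              for tag in ('xmin', 'ymin', 'xmax', 'ymax'))
    print(name, xmin, ymin, xmax, ymax)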

With the dataset replacement sorted out, let's now modify the files step by step:

cd py-faster-rcnn/lib/datasets

First, edit pascal_voc.py and modify the class tuple:

self._classes = ('__background__',  # keep this background class
                 ...)

Replace the remaining entries with the object classes you want to detect; for example, to detect people, just add a 'people' class (see the sketch below).
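For the single-class people example, the edited tuple would look roughly like this (a sketch; keep '__background__' at index 0):

# inside pascal_voc.__init__ in lib/datasets/pascal_voc.py
self._classes = ('__background__',  # always index 0
                 'people')          # replace/extend with your own class names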

Then modify the model parameters, taking the faster_rcnn_end2end model as an example:

cd py-faster-rcnn/models/pascal_voc/VGG16/faster_rcnn_end2end

The directory contains three prototxt files; modify them as follows:
solver.prototxt defines the solver and initialization parameters and generally needs no changes.
In train.prototxt, change the following:
① num_classes: set it to your number of classes; e.g. if I only detect people, it is 2 (1 class + background).
② cls_score layer, inner_product_param, num_output: also set to 2 (1 class + background).
③ bbox_pred layer, inner_product_param, num_output: set to (1 class + background) × 4, i.e. 8 (see the sketch below).
In test.prototxt, change the following:
① cls_score layer, inner_product_param, num_output: set to 2 (1 class + background).
② bbox_pred layer, inner_product_param, num_output: set to (1 class + background) × 4, i.e. 8.
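The num_output values follow directly from the number of classes; a small sketch of the arithmetic, using the single people class as the running example:

my_classes = ['people']                 # your object classes, background excluded
num_classes = 1 + len(my_classes)       # num_classes in train.prototxt       -> 2
cls_score_num_output = num_classes      # cls_score inner_product num_output  -> 2
bbox_pred_num_output = 4 * num_classes  # bbox_pred inner_product num_output  -> 8 (4 box coords per class)
print(num_classes, cls_score_num_output, bbox_pred_num_output)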

At this point, everything that needs changing for your own dataset has been changed. The training commands were given above, so they are not repeated here. Now it's just a matter of waiting patiently for the training results.

After training finishes, you can run demo.py to view the detection results. Note: don't forget to update the CLASSES tuple there as well. Also, demo.py processes one image at a time, and when an image contains several different classes the detections are not drawn on a single figure; a small modification to the script fixes this.
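For the people example, the change in tools/demo.py would look roughly like this (a sketch):

# near the top of tools/demo.py
CLASSES = ('__background__',  # always index 0
           'people')          # replace with the classes your model was trained on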

References:
【1】https://github.com/rbgirshick/py-faster-rcnn 
【2】Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun, NIPS 2015
【3】https://github.com/tzutalin/labelImg 
【4】http://caffe.berkeleyvision.org/installation.html#compilation

Faster R-CNN was developed by Shaoqing Ren, a junior schoolmate of mine from USTC, during his internship at Microsoft Research; it is currently the fastest deep-learning algorithm for image segmentation and object detection.

  • Download the code and data
git clone --recursive https://github.com/rbgirshick/py-faster-rcnn.git

  • Download the demo model data
[root@localhost py-faster-rcnn]# ./data/scripts/fetch_faster_rcnn_models.sh
Downloading Faster R-CNN demo models (695M)...
...
Unzipping...
faster_rcnn_models/
faster_rcnn_models/ZF_faster_rcnn_final.caffemodel
faster_rcnn_models/VGG16_faster_rcnn_final.caffemodel


  • Build the Cython extensions
Enter the lib directory and edit setup.py, commenting out the GPU-related code, as follows:

...
#CUDA = locate_cuda()

...
           self.set_executable('compiler_so', CUDA['nvcc'])
...
   Extension('nms.gpu_nms',
       ['nms/nms_kernel.cu', 'nms/gpu_nms.pyx'],
       library_dirs=[CUDA['lib64']],
       libraries=['cudart'],
       language='c++',
       runtime_library_dirs=[CUDA['lib64']],
       # this syntax is specific to this build system
       # we're only going to use certain compiler args with nvcc and not with
       # gcc the implementation of this trick is in customize_compiler() below
       extra_compile_args={'gcc': ["-Wno-unused-function"],
                           'nvcc': ['-arch=sm_35',
                                    '--ptxas-options=-v',
                                    '-c',
                                    '--compiler-options',
                                    "'-fPIC'"]},
       include_dirs = [numpy_include, CUDA['include']]
   ),
...

Build:
[root@localhost lib]# make

  • Install caffe (the version bundled with the project, not stock caffe)
Enter the caffe-fast-rcnn directory. Most of the steps are the same as in my earlier caffe installation notes; modify Makefile.config as follows:

## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!

# cuDNN acceleration switch (uncomment to build with cuDNN).
# USE_CUDNN := 1

# CPU-only switch (uncomment to build without GPU support).
CPU_ONLY := 1

# uncomment to disable IO dependencies and corresponding data layers
# USE_OPENCV := 0
# USE_LEVELDB := 0
# USE_LMDB := 0

# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
# You should not set this flag if you will be reading LMDBs with any
# possibility of simultaneous read and write
# ALLOW_LMDB_NOLOCK := 1

# Uncomment if you're using OpenCV 3
# OPENCV_VERSION := 3

# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++

# CUDA directory contains bin/ and lib/ directories that we need.
# CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr

# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 lines for compatibility.
#CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
       -gencode arch=compute_20,code=sm_21 \
       -gencode arch=compute_30,code=sm_30 \
       -gencode arch=compute_35,code=sm_35 \
       -gencode arch=compute_50,code=sm_50 \
       -gencode arch=compute_50,code=compute_50

# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
BLAS := atlas
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
BLAS_INCLUDE := /usr/include/atlas-x86_64-base
BLAS_LIB := /usr/lib64/atlas

# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib

# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB_DIR := /usr/local
# MATLAB_DIR := /Applications/MATLAB_R2012b.app

# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
PYTHON_INCLUDE := /usr/include/python2.7 \
                  /usr/lib64/python2.7/site-packages/numpy/core/include
# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
# ANACONDA_HOME := $(HOME)/anaconda
# PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
        # $(ANACONDA_HOME)/include/python2.7 \
        # $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include \

# Uncomment to use Python 3 (default is Python 2)
# PYTHON_LIBRARIES := boost_python3 python3.5m
# PYTHON_INCLUDE := /usr/include/python3.5m \
                /usr/lib/python3.5/dist-packages/numpy/core/include

# We need to be able to find libpythonX.X.so or .dylib.
PYTHON_LIB := /usr/lib64
# PYTHON_LIB := $(ANACONDA_HOME)/lib

# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib

# Uncomment to support layers written in Python (will link against Python libs)
WITH_PYTHON_LAYER := 1

# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/lib64

# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib

# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1

BUILD_DIR := build
DISTRIBUTE_DIR := distribute

# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1

# The ID of the GPU that 'make runtest' will use to run unit tests.
# TEST_GPUID := 0

# enable pretty build (comment to see full commands)
Q ?= @

Modify the Makefile:
LIBRARIES += satlas tatlas # newer ATLAS releases no longer provide cblas and atlas; link satlas and tatlas instead

Build caffe and pycaffe:
 [root@localhost caffe-fast-rcnn]# make -j8 && make pycaffe

  • Run the demo
[root@localhost py-faster-rcnn]# ./tools/demo.py
Traceback (most recent call last):
  File "./tools/demo.py", line 17, in <module>
    from fast_rcnn.config import cfg
  File "/root/zhanxiang/work/py-faster-rcnn/tools/../lib/fast_rcnn/config.py", line 23, in <module>
    from easydict import EasyDict as edict
ImportError: No module named easydict

The Python package easydict is missing, so install it: pip install easydict

[root@localhost py-faster-rcnn]# ./tools/demo.py
Traceback (most recent call last):
  File "./tools/demo.py", line 18, in <module>
    from fast_rcnn.test import im_detect
  File "/root/zhanxiang/work/py-faster-rcnn/tools/../lib/fast_rcnn/test.py", line 15, in <module>
    import cv2
ImportError: No module named cv2

The cv2 module is missing; it is part of OpenCV, so install the OpenCV Python bindings:
yum install opencv-python.x86_64

[root@localhost py-faster-rcnn]# python tools/demo.py --cpu
Traceback (most recent call last):
  File "tools/demo.py", line 21, in <module>
    import matplotlib.pyplot as plt
  File "/usr/lib64/python2.7/site-packages/matplotlib/pyplot.py", line 26, in <module>
    from matplotlib.figure import Figure, figaspect
  File "/usr/lib64/python2.7/site-packages/matplotlib/figure.py", line 36, in <module>
    from matplotlib.axes import Axes, SubplotBase, subplot_class_factory
  File "/usr/lib64/python2.7/site-packages/matplotlib/axes/__init__.py", line 4, in <module>
    from ._subplots import *
  File "/usr/lib64/python2.7/site-packages/matplotlib/axes/_subplots.py", line 10, in <module>
    from matplotlib.axes._axes import Axes
  File "/usr/lib64/python2.7/site-packages/matplotlib/axes/_axes.py", line 14, in <module>
    from matplotlib import unpack_labeled_data
ImportError: cannot import name unpack_labeled_data

This looks like a matplotlib problem: the pip-installed version is too old, so install from source,
following the official instructions: http://matplotlib.org/faq/installing_faq.html#install-from-git

[root@localhost work]# git clone git://github.com/matplotlib/matplotlib.git
[root@localhost work]# cd matplotlib/
Install the build dependencies:
[root@localhost matplotlib]# yum-builddep python-matplotlib
Install:
[root@localhost matplotlib]# python setup.py install

[root@localhost py-faster-rcnn]# python tools/demo.py --cpu
Traceback (most recent call last):
  File "tools/demo.py", line 19, in <module>
    from fast_rcnn.nms_wrapper import nms
  File "/root/zhanxiang/work/py-faster-rcnn/tools/../lib/fast_rcnn/nms_wrapper.py", line 9, in <module>
    from nms.gpu_nms import gpu_nms
ImportError: No module named gpu_nms

Because gpu_nms was never compiled (we commented it out of setup.py), edit nms_wrapper.py: comment out the gpu_nms import and change the default to force_cpu=True:
[root@localhost py-faster-rcnn]# vi lib/fast_rcnn/nms_wrapper.py
def nms(dets, thresh, force_cpu=True):
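A minimal CPU-only sketch of lib/fast_rcnn/nms_wrapper.py after these edits (the gpu_nms import is dropped since it was never built):

from nms.cpu_nms import cpu_nms

def nms(dets, thresh, force_cpu=True):
    """Dispatch to the CPU NMS implementation only."""
    if dets.shape[0] == 0:
        return []
    return cpu_nms(dets, thresh)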

  • Done
[root@localhost py-faster-rcnn]# python tools/demo.py --cpu
and you will see the detection results.
