Caffe example: R-CNN detection.ipynb


This post briefly walks through the caffe example detection.ipynb: the overall workflow, a few errors I ran into, and how to fix them:

1. Launch ipython notebook from the command line rather than from PyCharm. When running this example, the latter fails with a baffling error:

Traceback (most recent call last):
  File "../python/detect.py", line 173, in <module>
    main(sys.argv)
  File "../python/detect.py", line 144, in main
    detections = detector.detect_selective_search(inputs)
  File "/home/ljj/deep_learning/caffe-master/python/caffe/detector.py", line 120, in detect_selective_search
    cmd='selective_search_rcnn'
  File "/home/ljj/deep_learning/caffe-master/python/selective_search_ijcv_with_python/selective_search.py", line 36, in get_windows
    shlex.split(mc), stdout=open('/dev/null', 'w'), cwd=script_dirname)
  File "/usr/lib/python2.7/subprocess.py", line 711, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1343, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory
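The traceback shows where things go wrong: selective_search.py builds a MATLAB command string and hands it to subprocess, so if the matlab executable is not on the PATH of whatever launched the notebook (PyCharm's environment here), the child process cannot be spawned and you get OSError: [Errno 2]. A rough sketch of that call, paraphrased from the traceback (the names mc and script_dirname come from the real script, but the MATLAB command string below is only illustrative):

import shlex
import subprocess

def get_windows_sketch(matlab_script, script_dirname):
    # The real selective_search.py assembles a MATLAB command string (mc)
    # that runs selective_search_rcnn on the input images; the exact string
    # here is just a stand-in.
    mc = 'matlab -nojvm -r "{}; exit"'.format(matlab_script)
    # This Popen call is what raises OSError: [Errno 2] when 'matlab'
    # cannot be found on the PATH of the launching environment.
    proc = subprocess.Popen(shlex.split(mc),
                            stdout=open('/dev/null', 'w'),
                            cwd=script_dirname)
    proc.wait()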


2. Download https://github.com/sergeyk/selective_search_ijcv_with_python, extract it into /home/ljj/deep_learning/caffe-master/python, and rename the folder to selective_search_ijcv_with_python.

3. Open MATLAB and run the demo.m inside that folder.

4. Add MATLAB to your PATH so that typing matlab in a terminal launches MATLAB; see my other blog post for the details.
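A quick sanity check I find useful (not part of the original example) to confirm that the environment the notebook runs in can actually see MATLAB:

import distutils.spawn
import subprocess

# Prints the full path to the matlab executable, or None if it is not on
# the PATH inherited by this process -- in which case selective search
# will fail exactly as in step 1.
print(distutils.spawn.find_executable('matlab'))

# Equivalent shell check from inside the same environment (0 means found).
print(subprocess.call(['which', 'matlab']))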

5. Use Python 2.7 from Anaconda.
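To double-check which interpreter the notebook kernel is actually using (my own sanity check, not part of the example), run this in a cell:

import sys

# Both should point at the Anaconda Python 2.7 installation.
print(sys.executable)
print(sys.version)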

6. Even after all of the above, running the example in ipython notebook will most likely still report a few errors. Based on those error messages, modify /home/ljj/deep_learning/caffe-master/python/caffe/detector.py; the only real change is adding a few int() casts so that the float window coordinates become integers before they are used as slice indices (see the short illustration and the full modified file below).
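For context, the failure mode is that the selective search windows come back as floating-point coordinates, and slicing a numpy array with float bounds is rejected by recent numpy versions. A minimal, made-up illustration of the pattern (the exact exception and message depend on your numpy version):

import numpy as np

im = np.zeros((100, 100, 3), dtype=np.float32)
window = np.array([10.0, 20.0, 50.0, 60.0])  # ymin, xmin, ymax, xmax as floats

# crop = im[window[0]:window[2], window[1]:window[3]]  # errors on newer numpy
crop = im[int(window[0]):int(window[2]), int(window[1]):int(window[3])]  # works
print(crop.shape)

The full modified detector.py follows.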

#!/usr/bin/env python
"""
Do windowed detection by classifying a number of images/crops at once,
optionally using the selective search window proposal method.

This implementation follows ideas in
    Ross Girshick, Jeff Donahue, Trevor Darrell, Jitendra Malik.
    Rich feature hierarchies for accurate object detection and semantic
    segmentation.
    http://arxiv.org/abs/1311.2524

The selective_search_ijcv_with_python code required for the selective search
proposal mode is available at
    https://github.com/sergeyk/selective_search_ijcv_with_python
"""
import numpy as np
import os

import caffe


class Detector(caffe.Net):
    """
    Detector extends Net for windowed detection by a list of crops or
    selective search proposals.

    Parameters
    ----------
    mean, input_scale, raw_scale, channel_swap : params for preprocessing
        options.
    context_pad : amount of surrounding context to take s.t. a `context_pad`
        sized border of pixels in the network input image is context, as in
        R-CNN feature extraction.
    """
    def __init__(self, model_file, pretrained_file, mean=None,
                 input_scale=None, raw_scale=None, channel_swap=None,
                 context_pad=None):
        caffe.Net.__init__(self, model_file, pretrained_file, caffe.TEST)

        # configure pre-processing
        in_ = self.inputs[0]
        self.transformer = caffe.io.Transformer(
            {in_: self.blobs[in_].data.shape})
        self.transformer.set_transpose(in_, (2, 0, 1))
        if mean is not None:
            self.transformer.set_mean(in_, mean)
        if input_scale is not None:
            self.transformer.set_input_scale(in_, input_scale)
        if raw_scale is not None:
            self.transformer.set_raw_scale(in_, raw_scale)
        if channel_swap is not None:
            self.transformer.set_channel_swap(in_, channel_swap)

        self.configure_crop(context_pad)

    def detect_windows(self, images_windows):
        """
        Do windowed detection over given images and windows. Windows are
        extracted then warped to the input dimensions of the net.

        Parameters
        ----------
        images_windows: (image filename, window list) iterable.
        context_crop: size of context border to crop in pixels.

        Returns
        -------
        detections: list of {filename: image filename, window: crop coordinates,
            predictions: prediction vector} dicts.
        """
        # Extract windows.
        window_inputs = []
        for image_fname, windows in images_windows:
            image = caffe.io.load_image(image_fname).astype(np.float32)
            for window in windows:
                window_inputs.append(self.crop(image, window))

        # Run through the net (warping windows to input dimensions).
        in_ = self.inputs[0]
        caffe_in = np.zeros((len(window_inputs), window_inputs[0].shape[2])
                            + self.blobs[in_].data.shape[2:],
                            dtype=np.float32)
        for ix, window_in in enumerate(window_inputs):
            caffe_in[ix] = self.transformer.preprocess(in_, window_in)
        out = self.forward_all(**{in_: caffe_in})
        predictions = out[self.outputs[0]]     #origin

        # Package predictions with images and windows.
        detections = []
        ix = 0
        for image_fname, windows in images_windows:
            for window in windows:
                detections.append({
                    'window': window,
                    'prediction': predictions[ix],
                    'filename': image_fname
                })
                ix += 1
        return detections

    def detect_selective_search(self, image_fnames):
        """
        Do windowed detection over Selective Search proposals by extracting
        the crop and warping to the input dimensions of the net.

        Parameters
        ----------
        image_fnames: list

        Returns
        -------
        detections: list of {filename: image filename, window: crop coordinates,
            predictions: prediction vector} dicts.
        """
        import selective_search_ijcv_with_python as selective_search
        # Make absolute paths so MATLAB can find the files.
        image_fnames = [os.path.abspath(f) for f in image_fnames]
        windows_list = selective_search.get_windows(
            image_fnames,
            cmd='selective_search_rcnn'
        )
        # Run windowed detection on the selective search list.
        return self.detect_windows(zip(image_fnames, windows_list))

    def crop(self, im, window):
        """
        Crop a window from the image for detection. Include surrounding context
        according to the `context_pad` configuration.

        Parameters
        ----------
        im: H x W x K image ndarray to crop.
        window: bounding box coordinates as ymin, xmin, ymax, xmax.

        Returns
        -------
        crop: cropped window.
        """
        # Crop window from the image.
        crop = im[int(window[0]):int(window[2]), int(window[1]):int(window[3])]

        if self.context_pad:
            box = window.copy()
            crop_size = self.blobs[self.inputs[0]].width  # assumes square
            scale = crop_size / (1. * crop_size - self.context_pad * 2)
            # Crop a box + surrounding context.
            half_h = (box[2] - box[0] + 1) / 2.
            half_w = (box[3] - box[1] + 1) / 2.
            center = (box[0] + half_h, box[1] + half_w)
            scaled_dims = scale * np.array((-half_h, -half_w, half_h, half_w))
            box = np.round(np.tile(center, 2) + scaled_dims)
            full_h = box[2] - box[0] + 1
            full_w = box[3] - box[1] + 1
            scale_h = crop_size / full_h
            scale_w = crop_size / full_w
            pad_y = round(max(0, -box[0]) * scale_h)  # amount out-of-bounds
            pad_x = round(max(0, -box[1]) * scale_w)

            # Clip box to image dimensions.
            im_h, im_w = im.shape[:2]
            box = np.clip(box, 0., [im_h, im_w, im_h, im_w])
            clip_h = box[2] - box[0] + 1
            clip_w = box[3] - box[1] + 1
            assert(clip_h > 0 and clip_w > 0)
            crop_h = round(clip_h * scale_h)
            crop_w = round(clip_w * scale_w)
            if pad_y + crop_h > crop_size:
                crop_h = crop_size - pad_y
            if pad_x + crop_w > crop_size:
                crop_w = crop_size - pad_x

            # collect with context padding and place in input
            # with mean padding
            context_crop = im[int(box[0]):int(box[2]), int(box[1]):int(box[3])]
            context_crop = caffe.io.resize_image(context_crop, (crop_h, crop_w))
            crop = np.ones(self.crop_dims, dtype=np.float32) * self.crop_mean
            crop[int(pad_y):int(pad_y + crop_h), int(pad_x):int(pad_x + crop_w)] = context_crop

        return crop

    def configure_crop(self, context_pad):
        """
        Configure crop dimensions and amount of context for cropping.
        If context is included, make the special input mean for context padding.

        Parameters
        ----------
        context_pad : amount of context for cropping.
        """
        # crop dimensions
        in_ = self.inputs[0]
        tpose = self.transformer.transpose[in_]
        inv_tpose = [tpose[t] for t in tpose]
        self.crop_dims = np.array(self.blobs[in_].data.shape[1:])[inv_tpose]
        #.transpose(inv_tpose)

        # context padding
        self.context_pad = context_pad
        if self.context_pad:
            in_ = self.inputs[0]
            transpose = self.transformer.transpose.get(in_)
            channel_order = self.transformer.channel_swap.get(in_)
            raw_scale = self.transformer.raw_scale.get(in_)
            # Padding context crops needs the mean in unprocessed input space.
            mean = self.transformer.mean.get(in_)
            if mean is not None:
                inv_transpose = [transpose[t] for t in transpose]
                crop_mean = mean.copy().transpose(inv_transpose)
                if channel_order is not None:
                    channel_order_inverse = [channel_order.index(i)
                                             for i in range(crop_mean.shape[2])]
                    crop_mean = crop_mean[:, :, channel_order_inverse]
                if raw_scale is not None:
                    crop_mean /= raw_scale
                self.crop_mean = crop_mean
            else:
                self.crop_mean = np.zeros(self.crop_dims, dtype=np.float32)
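For reference, this Detector class is what detect.py drives under the hood (see the traceback in step 1). A rough sketch of how it gets used; the paths come from the log below, but the preprocessing values are only what I believe detect.py passes by default, so double-check them against detect.py before relying on them:

import caffe

model_def = '../models/bvlc_reference_rcnn_ilsvrc13/deploy.prototxt'
pretrained_model = ('../models/bvlc_reference_rcnn_ilsvrc13/'
                    'bvlc_reference_rcnn_ilsvrc13.caffemodel')

caffe.set_mode_gpu()

# raw_scale/channel_swap/context_pad mirror detect.py's defaults as I
# understand them; treat them as placeholders.
detector = caffe.Detector(model_def, pretrained_model,
                          raw_scale=255.0,
                          channel_swap=(2, 1, 0),
                          context_pad=16)

# Selective search proposals come from MATLAB; each window is then
# cropped, warped to the net's input size, and classified.
detections = detector.detect_selective_search(
    ['../examples/images/fish-bike.jpg'])
print(len(detections))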
7. At this point, running detection.ipynb again should work without problems; the output looks like this:

GPU mode
WARNING: Logging before InitGoogleLogging() is written to STDERR
W0710 15:26:55.556707  9306 _caffe.cpp:135] DEPRECATION WARNING - deprecated use of Python interface
W0710 15:26:55.556727  9306 _caffe.cpp:136] Use this instead (with the named "weights" parameter):
W0710 15:26:55.556730  9306 _caffe.cpp:138] Net('../models/bvlc_reference_rcnn_ilsvrc13/deploy.prototxt', 1, weights='../models/bvlc_reference_rcnn_ilsvrc13/bvlc_reference_rcnn_ilsvrc13.caffemodel')
I0710 15:26:55.557760  9306 net.cpp:51] Initializing net from parameters:
name: "R-CNN-ilsvrc13"
state {
  phase: TEST
  level: 0
}
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param {
    shape {
      dim: 10
      dim: 3
      dim: 227
      dim: 227
    }
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 96
    kernel_size: 11
    stride: 4
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "norm1"
  type: "LRN"
  bottom: "pool1"
  top: "norm1"
  lrn_param {
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "norm1"
  top: "conv2"
  convolution_param {
    num_output: 256
    pad: 2
    kernel_size: 5
    group: 2
  }
}
layer {
  name: "relu2"
  type: "ReLU"
  bottom: "conv2"
  top: "conv2"
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "norm2"
  type: "LRN"
  bottom: "pool2"
  top: "norm2"
  lrn_param {
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  }
}
layer {
  name: "conv3"
  type: "Convolution"
  bottom: "norm2"
  top: "conv3"
  convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
  }
}
layer {
  name: "relu3"
  type: "ReLU"
  bottom: "conv3"
  top: "conv3"
}
layer {
  name: "conv4"
  type: "Convolution"
  bottom: "conv3"
  top: "conv4"
  convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
    group: 2
  }
}
layer {
  name: "relu4"
  type: "ReLU"
  bottom: "conv4"
  top: "conv4"
}
layer {
  name: "conv5"
  type: "Convolution"
  bottom: "conv4"
  top: "conv5"
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    group: 2
  }
}
layer {
  name: "relu5"
  type: "ReLU"
  bottom: "conv5"
  top: "conv5"
}
layer {
  name: "pool5"
  type: "Pooling"
  bottom: "conv5"
  top: "pool5"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "fc6"
  type: "InnerProduct"
  bottom: "pool5"
  top: "fc6"
  inner_product_param {
    num_output: 4096
  }
}
layer {
  name: "relu6"
  type: "ReLU"
  bottom: "fc6"
  top: "fc6"
}
layer {
  name: "drop6"
  type: "Dropout"
  bottom: "fc6"
  top: "fc6"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  name: "fc7"
  type: "InnerProduct"
  bottom: "fc6"
  top: "fc7"
  inner_product_param {
    num_output: 4096
  }
}
layer {
  name: "relu7"
  type: "ReLU"
  bottom: "fc7"
  top: "fc7"
}
layer {
  name: "drop7"
  type: "Dropout"
  bottom: "fc7"
  top: "fc7"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  name: "fc-rcnn"
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc-rcnn"
  inner_product_param {
    num_output: 200
  }
}
I0710 15:26:55.557831  9306 layer_factory.hpp:77] Creating layer data
I0710 15:26:55.557837  9306 net.cpp:84] Creating Layer data
I0710 15:26:55.557840  9306 net.cpp:380] data -> data
I0710 15:26:55.564779  9306 net.cpp:122] Setting up data
I0710 15:26:55.564796  9306 net.cpp:129] Top shape: 10 3 227 227 (1545870)
I0710 15:26:55.564800  9306 net.cpp:137] Memory required for data: 6183480
I0710 15:26:55.564805  9306 layer_factory.hpp:77] Creating layer conv1
I0710 15:26:55.564812  9306 net.cpp:84] Creating Layer conv1
I0710 15:26:55.564815  9306 net.cpp:406] conv1 <- data
I0710 15:26:55.564820  9306 net.cpp:380] conv1 -> conv1
I0710 15:26:55.565482  9306 net.cpp:122] Setting up conv1
I0710 15:26:55.565491  9306 net.cpp:129] Top shape: 10 96 55 55 (2904000)
I0710 15:26:55.565493  9306 net.cpp:137] Memory required for data: 17799480
I0710 15:26:55.565501  9306 layer_factory.hpp:77] Creating layer relu1
I0710 15:26:55.565506  9306 net.cpp:84] Creating Layer relu1
I0710 15:26:55.565508  9306 net.cpp:406] relu1 <- conv1
I0710 15:26:55.565512  9306 net.cpp:367] relu1 -> conv1 (in-place)
I0710 15:26:55.565517  9306 net.cpp:122] Setting up relu1
I0710 15:26:55.565521  9306 net.cpp:129] Top shape: 10 96 55 55 (2904000)
I0710 15:26:55.565523  9306 net.cpp:137] Memory required for data: 29415480
I0710 15:26:55.565526  9306 layer_factory.hpp:77] Creating layer pool1
I0710 15:26:55.565531  9306 net.cpp:84] Creating Layer pool1
I0710 15:26:55.565533  9306 net.cpp:406] pool1 <- conv1
I0710 15:26:55.565536  9306 net.cpp:380] pool1 -> pool1
I0710 15:26:55.565560  9306 net.cpp:122] Setting up pool1
I0710 15:26:55.565565  9306 net.cpp:129] Top shape: 10 96 27 27 (699840)
I0710 15:26:55.565567  9306 net.cpp:137] Memory required for data: 32214840
I0710 15:26:55.565568  9306 layer_factory.hpp:77] Creating layer norm1
I0710 15:26:55.565574  9306 net.cpp:84] Creating Layer norm1
I0710 15:26:55.565577  9306 net.cpp:406] norm1 <- pool1
I0710 15:26:55.565579  9306 net.cpp:380] norm1 -> norm1
I0710 15:26:55.565599  9306 net.cpp:122] Setting up norm1
I0710 15:26:55.565609  9306 net.cpp:129] Top shape: 10 96 27 27 (699840)
I0710 15:26:55.565613  9306 net.cpp:137] Memory required for data: 35014200
I0710 15:26:55.565614  9306 layer_factory.hpp:77] Creating layer conv2
I0710 15:26:55.565619  9306 net.cpp:84] Creating Layer conv2
I0710 15:26:55.565621  9306 net.cpp:406] conv2 <- norm1
I0710 15:26:55.565624  9306 net.cpp:380] conv2 -> conv2
I0710 15:26:55.566407  9306 net.cpp:122] Setting up conv2
I0710 15:26:55.566416  9306 net.cpp:129] Top shape: 10 256 27 27 (1866240)
I0710 15:26:55.566417  9306 net.cpp:137] Memory required for data: 42479160
I0710 15:26:55.566423  9306 layer_factory.hpp:77] Creating layer relu2
I0710 15:26:55.566428  9306 net.cpp:84] Creating Layer relu2
I0710 15:26:55.566431  9306 net.cpp:406] relu2 <- conv2
I0710 15:26:55.566433  9306 net.cpp:367] relu2 -> conv2 (in-place)
I0710 15:26:55.566437  9306 net.cpp:122] Setting up relu2
I0710 15:26:55.566440  9306 net.cpp:129] Top shape: 10 256 27 27 (1866240)
I0710 15:26:55.566442  9306 net.cpp:137] Memory required for data: 49944120
I0710 15:26:55.566445  9306 layer_factory.hpp:77] Creating layer pool2
I0710 15:26:55.566449  9306 net.cpp:84] Creating Layer pool2
I0710 15:26:55.566452  9306 net.cpp:406] pool2 <- conv2
I0710 15:26:55.566455  9306 net.cpp:380] pool2 -> pool2
I0710 15:26:55.566478  9306 net.cpp:122] Setting up pool2
I0710 15:26:55.566483  9306 net.cpp:129] Top shape: 10 256 13 13 (432640)
I0710 15:26:55.566485  9306 net.cpp:137] Memory required for data: 51674680
I0710 15:26:55.566488  9306 layer_factory.hpp:77] Creating layer norm2
I0710 15:26:55.566493  9306 net.cpp:84] Creating Layer norm2
I0710 15:26:55.566496  9306 net.cpp:406] norm2 <- pool2
I0710 15:26:55.566500  9306 net.cpp:380] norm2 -> norm2
I0710 15:26:55.566519  9306 net.cpp:122] Setting up norm2
I0710 15:26:55.566524  9306 net.cpp:129] Top shape: 10 256 13 13 (432640)
I0710 15:26:55.566526  9306 net.cpp:137] Memory required for data: 53405240
I0710 15:26:55.566529  9306 layer_factory.hpp:77] Creating layer conv3
I0710 15:26:55.566535  9306 net.cpp:84] Creating Layer conv3
I0710 15:26:55.566537  9306 net.cpp:406] conv3 <- norm2
I0710 15:26:55.566542  9306 net.cpp:380] conv3 -> conv3
I0710 15:26:55.567703  9306 net.cpp:122] Setting up conv3
I0710 15:26:55.567713  9306 net.cpp:129] Top shape: 10 384 13 13 (648960)
I0710 15:26:55.567716  9306 net.cpp:137] Memory required for data: 56001080
I0710 15:26:55.567724  9306 layer_factory.hpp:77] Creating layer relu3
I0710 15:26:55.567728  9306 net.cpp:84] Creating Layer relu3
I0710 15:26:55.567731  9306 net.cpp:406] relu3 <- conv3
I0710 15:26:55.567736  9306 net.cpp:367] relu3 -> conv3 (in-place)
I0710 15:26:55.567741  9306 net.cpp:122] Setting up relu3
I0710 15:26:55.567744  9306 net.cpp:129] Top shape: 10 384 13 13 (648960)
I0710 15:26:55.567746  9306 net.cpp:137] Memory required for data: 58596920
I0710 15:26:55.567749  9306 layer_factory.hpp:77] Creating layer conv4
I0710 15:26:55.567754  9306 net.cpp:84] Creating Layer conv4
I0710 15:26:55.567757  9306 net.cpp:406] conv4 <- conv3
I0710 15:26:55.567761  9306 net.cpp:380] conv4 -> conv4
I0710 15:26:55.568743  9306 net.cpp:122] Setting up conv4
I0710 15:26:55.568755  9306 net.cpp:129] Top shape: 10 384 13 13 (648960)
I0710 15:26:55.568758  9306 net.cpp:137] Memory required for data: 61192760
I0710 15:26:55.568763  9306 layer_factory.hpp:77] Creating layer relu4
I0710 15:26:55.568768  9306 net.cpp:84] Creating Layer relu4
I0710 15:26:55.568770  9306 net.cpp:406] relu4 <- conv4
I0710 15:26:55.568775  9306 net.cpp:367] relu4 -> conv4 (in-place)
I0710 15:26:55.568780  9306 net.cpp:122] Setting up relu4
I0710 15:26:55.568783  9306 net.cpp:129] Top shape: 10 384 13 13 (648960)
I0710 15:26:55.568785  9306 net.cpp:137] Memory required for data: 63788600
I0710 15:26:55.568787  9306 layer_factory.hpp:77] Creating layer conv5
I0710 15:26:55.568792  9306 net.cpp:84] Creating Layer conv5
I0710 15:26:55.568795  9306 net.cpp:406] conv5 <- conv4
I0710 15:26:55.568799  9306 net.cpp:380] conv5 -> conv5
I0710 15:26:55.569517  9306 net.cpp:122] Setting up conv5
I0710 15:26:55.569525  9306 net.cpp:129] Top shape: 10 256 13 13 (432640)
I0710 15:26:55.569528  9306 net.cpp:137] Memory required for data: 65519160
I0710 15:26:55.569535  9306 layer_factory.hpp:77] Creating layer relu5
I0710 15:26:55.569540  9306 net.cpp:84] Creating Layer relu5
I0710 15:26:55.569543  9306 net.cpp:406] relu5 <- conv5
I0710 15:26:55.569547  9306 net.cpp:367] relu5 -> conv5 (in-place)
I0710 15:26:55.569552  9306 net.cpp:122] Setting up relu5
I0710 15:26:55.569556  9306 net.cpp:129] Top shape: 10 256 13 13 (432640)
I0710 15:26:55.569558  9306 net.cpp:137] Memory required for data: 67249720
I0710 15:26:55.569561  9306 layer_factory.hpp:77] Creating layer pool5
I0710 15:26:55.569566  9306 net.cpp:84] Creating Layer pool5
I0710 15:26:55.569568  9306 net.cpp:406] pool5 <- conv5
I0710 15:26:55.569572  9306 net.cpp:380] pool5 -> pool5
I0710 15:26:55.569597  9306 net.cpp:122] Setting up pool5
I0710 15:26:55.569602  9306 net.cpp:129] Top shape: 10 256 6 6 (92160)
I0710 15:26:55.569604  9306 net.cpp:137] Memory required for data: 67618360
I0710 15:26:55.569607  9306 layer_factory.hpp:77] Creating layer fc6
I0710 15:26:55.569614  9306 net.cpp:84] Creating Layer fc6
I0710 15:26:55.569617  9306 net.cpp:406] fc6 <- pool5
I0710 15:26:55.569622  9306 net.cpp:380] fc6 -> fc6
I0710 15:26:55.617084  9306 net.cpp:122] Setting up fc6
I0710 15:26:55.617112  9306 net.cpp:129] Top shape: 10 4096 (40960)
I0710 15:26:55.617117  9306 net.cpp:137] Memory required for data: 67782200
I0710 15:26:55.617127  9306 layer_factory.hpp:77] Creating layer relu6
I0710 15:26:55.617137  9306 net.cpp:84] Creating Layer relu6
I0710 15:26:55.617143  9306 net.cpp:406] relu6 <- fc6
I0710 15:26:55.617151  9306 net.cpp:367] relu6 -> fc6 (in-place)
I0710 15:26:55.617159  9306 net.cpp:122] Setting up relu6
I0710 15:26:55.617164  9306 net.cpp:129] Top shape: 10 4096 (40960)
I0710 15:26:55.617167  9306 net.cpp:137] Memory required for data: 67946040
I0710 15:26:55.617172  9306 layer_factory.hpp:77] Creating layer drop6
I0710 15:26:55.617178  9306 net.cpp:84] Creating Layer drop6
I0710 15:26:55.617182  9306 net.cpp:406] drop6 <- fc6
I0710 15:26:55.617187  9306 net.cpp:367] drop6 -> fc6 (in-place)
I0710 15:26:55.617215  9306 net.cpp:122] Setting up drop6
I0710 15:26:55.617221  9306 net.cpp:129] Top shape: 10 4096 (40960)
I0710 15:26:55.617225  9306 net.cpp:137] Memory required for data: 68109880
I0710 15:26:55.617229  9306 layer_factory.hpp:77] Creating layer fc7
I0710 15:26:55.617235  9306 net.cpp:84] Creating Layer fc7
I0710 15:26:55.617239  9306 net.cpp:406] fc7 <- fc6
I0710 15:26:55.617244  9306 net.cpp:380] fc7 -> fc7
I0710 15:26:55.645105  9306 net.cpp:122] Setting up fc7
I0710 15:26:55.645131  9306 net.cpp:129] Top shape: 10 4096 (40960)
I0710 15:26:55.645136  9306 net.cpp:137] Memory required for data: 68273720
I0710 15:26:55.645146  9306 layer_factory.hpp:77] Creating layer relu7
I0710 15:26:55.645155  9306 net.cpp:84] Creating Layer relu7
I0710 15:26:55.645159  9306 net.cpp:406] relu7 <- fc7
I0710 15:26:55.645164  9306 net.cpp:367] relu7 -> fc7 (in-place)
I0710 15:26:55.645170  9306 net.cpp:122] Setting up relu7
I0710 15:26:55.645174  9306 net.cpp:129] Top shape: 10 4096 (40960)
I0710 15:26:55.645176  9306 net.cpp:137] Memory required for data: 68437560
I0710 15:26:55.645179  9306 layer_factory.hpp:77] Creating layer drop7
I0710 15:26:55.645184  9306 net.cpp:84] Creating Layer drop7
I0710 15:26:55.645186  9306 net.cpp:406] drop7 <- fc7
I0710 15:26:55.645190  9306 net.cpp:367] drop7 -> fc7 (in-place)
I0710 15:26:55.645210  9306 net.cpp:122] Setting up drop7
I0710 15:26:55.645215  9306 net.cpp:129] Top shape: 10 4096 (40960)
I0710 15:26:55.645217  9306 net.cpp:137] Memory required for data: 68601400
I0710 15:26:55.645221  9306 layer_factory.hpp:77] Creating layer fc-rcnn
I0710 15:26:55.645227  9306 net.cpp:84] Creating Layer fc-rcnn
I0710 15:26:55.645231  9306 net.cpp:406] fc-rcnn <- fc7
I0710 15:26:55.645236  9306 net.cpp:380] fc-rcnn -> fc-rcnn
I0710 15:26:55.646569  9306 net.cpp:122] Setting up fc-rcnn
I0710 15:26:55.646595  9306 net.cpp:129] Top shape: 10 200 (2000)
I0710 15:26:55.646598  9306 net.cpp:137] Memory required for data: 68609400
I0710 15:26:55.646606  9306 net.cpp:200] fc-rcnn does not need backward computation.
I0710 15:26:55.646610  9306 net.cpp:200] drop7 does not need backward computation.
I0710 15:26:55.646613  9306 net.cpp:200] relu7 does not need backward computation.
I0710 15:26:55.646615  9306 net.cpp:200] fc7 does not need backward computation.
I0710 15:26:55.646618  9306 net.cpp:200] drop6 does not need backward computation.
I0710 15:26:55.646621  9306 net.cpp:200] relu6 does not need backward computation.
I0710 15:26:55.646623  9306 net.cpp:200] fc6 does not need backward computation.
I0710 15:26:55.646626  9306 net.cpp:200] pool5 does not need backward computation.
I0710 15:26:55.646628  9306 net.cpp:200] relu5 does not need backward computation.
I0710 15:26:55.646631  9306 net.cpp:200] conv5 does not need backward computation.
I0710 15:26:55.646634  9306 net.cpp:200] relu4 does not need backward computation.
I0710 15:26:55.646637  9306 net.cpp:200] conv4 does not need backward computation.
I0710 15:26:55.646639  9306 net.cpp:200] relu3 does not need backward computation.
I0710 15:26:55.646642  9306 net.cpp:200] conv3 does not need backward computation.
I0710 15:26:55.646646  9306 net.cpp:200] norm2 does not need backward computation.
I0710 15:26:55.646648  9306 net.cpp:200] pool2 does not need backward computation.
I0710 15:26:55.646651  9306 net.cpp:200] relu2 does not need backward computation.
I0710 15:26:55.646656  9306 net.cpp:200] conv2 does not need backward computation.
I0710 15:26:55.646661  9306 net.cpp:200] norm1 does not need backward computation.
I0710 15:26:55.646663  9306 net.cpp:200] pool1 does not need backward computation.
I0710 15:26:55.646667  9306 net.cpp:200] relu1 does not need backward computation.
I0710 15:26:55.646672  9306 net.cpp:200] conv1 does not need backward computation.
I0710 15:26:55.646674  9306 net.cpp:200] data does not need backward computation.
I0710 15:26:55.646677  9306 net.cpp:242] This network produces output fc-rcnn
I0710 15:26:55.646690  9306 net.cpp:255] Network initialization done.
I0710 15:26:55.735093  9306 upgrade_proto.cpp:53] Attempting to upgrade input file specified using deprecated V1LayerParameter: ../models/bvlc_reference_rcnn_ilsvrc13/bvlc_reference_rcnn_ilsvrc13.caffemodel
I0710 15:26:55.855332  9306 upgrade_proto.cpp:61] Successfully upgraded file specified using deprecated V1LayerParameter
I0710 15:26:55.856111  9306 upgrade_proto.cpp:67] Attempting to upgrade input file specified using deprecated input fields: ../models/bvlc_reference_rcnn_ilsvrc13/bvlc_reference_rcnn_ilsvrc13.caffemodel
I0710 15:26:55.856118  9306 upgrade_proto.cpp:70] Successfully upgraded file specified using deprecated input fields.
W0710 15:26:55.856122  9306 upgrade_proto.cpp:72] Note that future Caffe releases will only support input layers and not input fields.
Loading input...
selective_search_rcnn({'/home/ljj/deep_learning/caffe-master/examples/images/fish-bike.jpg'}, '/tmp/tmp9O84If.mat')
Processed 1565 windows in 47.733 s.
/home/ljj/.local/lib/python2.7/site-packages/pandas/core/generic.py:1138: PerformanceWarning: your performance may suffer as PyTables will pickle object types that it cannot
map directly to c-types [inferred_type->mixed,key->block1_values] [items->['prediction']]
  return pytables.to_hdf(path_or_buf, key, self, **kwargs)
Saved to _temp/det_output.h5 in 0.035 s.
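The last lines show the detections being written out to _temp/det_output.h5 with pandas. A small sketch of loading them back for inspection; the HDF key 'df' is what I believe the notebook uses, so adjust it if your version differs:

import pandas as pd

# Each row holds the image filename, the window coordinates (ymin, xmin,
# ymax, xmax) and the 200-way prediction vector for that window.
df = pd.read_hdf('_temp/det_output.h5', 'df')
print(df.shape)
print(df.iloc[0])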
8. This is as far as I have gotten. Solving the problems above took quite a lot of time, so I am recording them here.

Reference:

http://blog.csdn.net/thystar/article/details/50727830