Classifying ImageNet: the instant Caffe way


This is a walkthrough of Caffe's ImageNet classification interface. Before running it you need a complete Caffe build, i.e. make all, make test, make runtest and make pycaffe. To run it on my own machine I made only minor modifications and added notes; the original file comes from the Notebook Examples on the Caffe website: http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/classification.ipynb

Before the experiment it is recommended to run ./scripts/download_model_binary.py models/bvlc_reference_caffenet to download the caffemodel in advance. You can also let the code in the first step download it; it will just be slower.

---Last update: June 5, 2015

Setup

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

# Change the working directory to caffe-master
%cd '/home/ouxinyu/caffe-master'

# Make sure that caffe is on the python path:
caffe_root = './'  # this file is expected to be in {caffe_root}/examples
import sys
sys.path.insert(0, caffe_root + 'python')
import caffe

# Set the right path to your model definition file, pretrained model weights, and the image you would like to classify.
MODEL_FILE = 'models/bvlc_reference_caffenet/deploy.prototxt'
PRETRAINED = 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel'
IMAGE_FILE = 'examples/images/cat.jpg'

import os
if not os.path.isfile(PRETRAINED):
    print("Downloading pre-trained CaffeNet model...")
    !scripts/download_model_binary.py models/bvlc_reference_caffenet
/home/ouxinyu/caffe-master

Loading the pre-trained caffemodel

caffe.Classifier is Caffe's wrapper for loading a network model. It takes the model definition file deploy.prototxt, the trained weights (the caffemodel) and the mean file. In addition, the channels have to be swapped from RGB to BGR, the pixel values rescaled to [0, 255], and the input image size specified as (256, 256).

net = caffe.Classifier(MODEL_FILE, PRETRAINED,
                       mean=np.load(caffe_root + 'python/caffe/imagenet/ilsvrc_2012_mean.npy').mean(1).mean(1),
                       channel_swap=(2,1,0),
                       raw_scale=255,
                       image_dims=(256, 256))
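As a quick sanity check (not in the original notebook): the raw mean file is a (3, 256, 256) BGR image, and .mean(1).mean(1) collapses it into a single mean value per channel, which is the form used here.

# Optional: inspect the ILSVRC mean before and after averaging.
mu = np.load(caffe_root + 'python/caffe/imagenet/ilsvrc_2012_mean.npy')
print 'mean shape:', mu.shape                  # (3, 256, 256)
print 'per-channel mean (BGR):', mu.mean(1).mean(1)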

Feature extraction and visualization on the CPU

For comparison, first run in CPU mode.

caffe.set_mode_cpu()
input_image = caffe.io.load_image(IMAGE_FILE)
plt.imshow(input_image)
<matplotlib.image.AxesImage at 0x7f3c48472e90>

Following the ImageNet 2012 paper, prediction by default runs 10 forward passes and averages them: the center crop, the four corner crops, and their horizontally mirrored versions.
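To see these ten views concretely, here is a small check (not in the original notebook) that builds them explicitly with caffe.io.oversample, the same helper used in the timing section further down:

# Build the ten views by hand: resize to net.image_dims, then take the
# center crop, the four corner crops and their horizontal mirrors.
crops = caffe.io.oversample([caffe.io.resize_image(input_image, net.image_dims)], net.crop_dims)
print 'number of views:', crops.shape[0]  # expected: 10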

The labels for the predictions come from the default synset_words.txt file, which can be fetched with: ~/caffe-master$ sh data/ilsvrc12/get_ilsvrc_aux.sh (see the label-lookup sketch after the prediction below).

prediction = net.predict([input_image])  # predict takes any number of images, and formats them for the Caffe net automatically
print 'prediction shape:', prediction[0].shape
plt.plot(prediction[0])
print 'predicted class:', prediction[0].argmax()
prediction shape: (1000,)
predicted class: 281
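Class 281 is just an index into the ImageNet synsets. Assuming get_ilsvrc_aux.sh has already been run (so data/ilsvrc12/synset_words.txt exists), a minimal sketch to turn the index into a human-readable label:

# Map the predicted class index to its ImageNet synset description.
labels = np.loadtxt(caffe_root + 'data/ilsvrc12/synset_words.txt', str, delimiter='\t')
print 'predicted label:', labels[prediction[0].argmax()]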

The test below uses only the center crop, by setting oversample=False. The predicted class is the same, but the scores differ slightly; presumably the 10-view prediction can improve prediction accuracy somewhat.

prediction = net.predict([input_image], oversample=False)
print 'prediction shape:', prediction[0].shape
plt.plot(prediction[0])
print 'predicted class:', prediction[0].argmax()
prediction shape: (1000,)
predicted class: 281

As for running time, the 10-view prediction takes a little longer than center-only, but not by much. The official example, run on an Intel i5 CPU, reports about 355 ms for the 10-view prediction.

%timeit net.predict([input_image],oversample=False)
1 loops, best of 3: 235 ms per loop
%timeit net.predict([input_image])
1 loops, best of 3: 236 ms per loop

Feature extraction and visualization on the GPU

Next, time single-image classification with the input preprocessing done explicitly:

# Resize the image to the standard (256, 256) and oversample net input sized crops.
input_oversampled = caffe.io.oversample([caffe.io.resize_image(input_image, net.image_dims)], net.crop_dims)
# 'data' is the input blob name in the model definition, so we preprocess for that input.
caffe_input = np.asarray([net.transformer.preprocess('data', in_) for in_ in input_oversampled])
# forward() takes keyword args for the input blobs with preprocessed input arrays.
%timeit net.forward(data=caffe_input)
10 loops, best of 3: 204 ms per loop

Now switch to GPU mode for comparison.

caffe.set_mode_gpu()
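On a machine with more than one GPU you can also pin pycaffe to a specific device before switching modes; a minimal sketch, assuming device 0 is the one you want:

# Optionally select which GPU pycaffe should use (device 0 here).
caffe.set_device(0)
caffe.set_mode_gpu()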
prediction = net.predict([input_image])
print 'prediction shape:', prediction[0].shape
plt.plot(prediction[0])
print 'prediction class:', prediction[0].argmax()
prediction shape: (1000,)
prediction class: 281

The results below are all 10-view predictions. The GPU (a GTX 770) is clearly faster than the CPU (an i7 4790k), by roughly 4-5x.

# Full pipeline timing.
%timeit net.predict([input_image])
10 loops, best of 3: 43.1 ms per loop
# Forward pass timing.
%timeit net.forward(data=caffe_input)
100 loops, best of 3: 17.4 ms per loop