Caffe Learning Notes 10 -- Fine-tuning a Pretrained Network for Style Recognition

This is the fourth example in the official Caffe Notebook Examples; link: http://nbviewer.jupyter.org/github/BVLC/caffe/blob/master/examples/03-fine-tuning.ipynb

This example fine-tunes a pretrained network on the flickr_style data, i.e. it adapts an already-trained Caffe network to our own dataset. The advantage of this approach is that the pretrained network has learned from a large image dataset, so its intermediate layers capture the "semantics" of general visual appearance. We can treat it as a black box of strong features, and only a few layers need to be trained to obtain good features for our data.
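To make the "black box of strong features" idea concrete, here is a minimal sketch (not part of the official notebook) that loads the pretrained CaffeNet, whose weights are downloaded in step 2 below, and lists its intermediate blobs; the paths follow the standard Caffe tree used throughout this post:

import os, sys
caffe_root = '/home/sindyz/caffe-master/'  # same path as used in step 1 below
os.chdir(caffe_root)
sys.path.insert(0, './python')
import caffe

# Load the pretrained CaffeNet in test mode (weights are fetched in step 2 below).
net = caffe.Net('models/bvlc_reference_caffenet/deploy.prototxt',
                'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel',
                caffe.TEST)
# Intermediate blobs such as conv5 and fc7 hold generic visual features;
# fine-tuning reuses these weights and only re-learns the final classifier layer.
for name, blob in net.blobs.items():
    print name, blob.data.shape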

First, we need to prepare the data, which involves the following steps:

  • Get the ImageNet ILSVRC pretrained model
  • Download a subset of the Flickr style dataset
  • Compile the downloaded data into a format Caffe can use

1. Import the packages the program needs:

import os
caffe_root = '/home/sindyz/caffe-master/'
os.chdir(caffe_root)
import sys
sys.path.insert(0, './python')
import caffe
import numpy as np
from pylab import *

2. Download the data and the model

# This downloads the ilsvrc auxiliary data (mean file, etc),
# and a subset of 2000 images for the style recognition task.
!data/ilsvrc12/get_ilsvrc_aux.sh
!scripts/download_model_binary.py models/bvlc_reference_caffenet
!python examples/finetune_flickr_style/assemble_data.py \
    --workers=-1 --images=2000 --seed=1701 --label=5
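As an optional sanity check (not in the notebook), one can confirm the assembled style data is in place; the data/flickr_style path and file names below are assumptions based on the standard Caffe layout that assemble_data.py writes to:

import os
style_dir = 'data/flickr_style'          # assumed output location of assemble_data.py
print os.listdir(style_dir)              # expect image list file(s) and an images/ folder
# Peek at one of the generated label lists (the file name is an assumption):
list_files = [f for f in os.listdir(style_dir) if f.endswith('.txt')]
if list_files:
    with open(os.path.join(style_dir, list_files[0])) as f:
        print f.readline().strip()       # expected format: "<image path> <style label>"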

3. Compare the differences between the two model definitions

!diff models/bvlc_reference_caffenet/train_val.prototxt models/finetune_flickr_style/train_val.prototxt  

The output is omitted here; in brief, the fine-tuning prototxt differs from the original mainly in its data layers, which now read the Flickr style image lists, and in the final fully connected layer, which is renamed so that its weights are learned anew for the style labels.


4. Train in Python, comparing the fine-tuned result with training from scratch

niter = 200
# losses will also be stored in the log
train_loss = np.zeros(niter)
scratch_train_loss = np.zeros(niter)

caffe.set_device(0)
caffe.set_mode_gpu()
# We create a solver that fine-tunes from a previously trained network.
solver = caffe.SGDSolver('models/finetune_flickr_style/solver.prototxt')
solver.net.copy_from('models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel')
# For reference, we also create a solver that does no finetuning.
scratch_solver = caffe.SGDSolver('models/finetune_flickr_style/solver.prototxt')

# We run the solver for niter times, and record the training loss.
for it in range(niter):
    solver.step(1)  # SGD by Caffe
    scratch_solver.step(1)
    # store the train loss
    train_loss[it] = solver.net.blobs['loss'].data
    scratch_train_loss[it] = scratch_solver.net.blobs['loss'].data
    if it % 10 == 0:
        print 'iter %d, finetune_loss=%f, scratch_loss=%f' % (it, train_loss[it], scratch_train_loss[it])
print 'done'
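Note that both solvers are built from the same solver.prototxt; the only difference is the copy_from call, which initializes the fine-tuned net with the pretrained CaffeNet weights, while the scratch solver keeps its random initialization.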

Output omitted...


5. Look at the training loss
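The plot omitted here can be reproduced with a one-line call (a sketch relying on the pylab import from step 1):

# Plot fine-tuning loss and from-scratch loss side by side over the 200 iterations.
plot(np.vstack([train_loss, scratch_train_loss]).T)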


As the plot shows, the loss from fine-tuning fluctuates less and is smaller than the loss of the network trained from scratch.


6. Zoom in on the smaller loss values:

plot(np.vstack([train_loss, scratch_train_loss]).clip(0, 4).T)


7. Check the test accuracy after 200 iterations. The classification task has 5 classes, so chance accuracy is 20%. As expected, the fine-tuned result is clearly better than training from scratch.

test_iters = 10
accuracy = 0
scratch_accuracy = 0
for it in arange(test_iters):
    solver.test_nets[0].forward()
    accuracy += solver.test_nets[0].blobs['accuracy'].data
    scratch_solver.test_nets[0].forward()
    scratch_accuracy += scratch_solver.test_nets[0].blobs['accuracy'].data
accuracy /= test_iters
scratch_accuracy /= test_iters
print 'Accuracy for fine-tuning:', accuracy
print 'Accuracy for training from scratch:', scratch_accuracy

Accuracy for fine-tuning: 0.547999998927

Accuracy for training from scratch: 0.218000002205


 

References:

http://nbviewer.jupyter.org/github/BVLC/caffe/blob/master/examples/03-fine-tuning.ipynb



