Deep Dive into the caffe Source Code 1: Parsing caffe.cpp


        It has been almost half a year since I first came into contact with deep learning. In that time I started with the LeNet network, then studied Fast R-CNN and Faster R-CNN, which were extremely popular in 2015-2016, and lately I have been applying deep learning to my own projects. My strongest impression is this: after working through a number of examples I have a rough grasp of deep learning, but I am still a long way from real fluency. Which brings to mind the old saying: read the fxxx source code. So I have started reading the caffe source, and I plan to share what I learn through these posts.

   In writing these posts I am, as a relative beginner, bound to make some mistakes and omissions. Readers are welcome to point them out in the comments; I promise to reply warmly and discuss actively. Your support and enthusiasm are what keep me going. Now, on to the main content!

   When beginners train a neural network with caffe, whether following an existing example or configuring everything themselves, the steps are usually: prepare the dataset -> configure the network structure -> configure the training parameters -> train the network -> use the model interface files to load and use the trained network. Because caffe is encapsulated so cleanly and hierarchically, this workflow requires no real understanding of the source code. But for higher-level tasks, such as implementing your own network layer or your own loss function, knowing only the configuration files is not enough; you need to understand caffe's structure and source code to use it in any depth. So where should we start? I suggest starting from caffe.cpp, located at ./tools/caffe.cpp in the caffe tree. Whenever we launch a script or type a command to start training a deep network, it is this file that parses and executes the command, making caffe.cpp the entrance to the maze. Below I give the code along with my annotations.
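   As a concrete reminder of how this file is reached in practice, a typical training launch looks something like the following (the binary and solver paths are only an example, here taken from the standard MNIST tutorial, and will depend on your own setup):

./build/tools/caffe train --solver=examples/mnist/lenet_solver.prototxt

   Here "train" is the command that GetBrewFunction() will look up, and "--solver=..." is one of the gflags-defined arguments declared at the top of the file below.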

#ifdef WITH_PYTHON_LAYER
#include "boost/python.hpp"
namespace bp = boost::python;
#endif

#include <gflags/gflags.h>
#include <glog/logging.h>

#include <cstring>
#include <map>
#include <string>
#include <vector>

#include "boost/algorithm/string.hpp"
#include "caffe/caffe.hpp"
#include "caffe/util/signal_handler.h"

using caffe::Blob;
using caffe::Caffe;
using caffe::Net;
using caffe::Layer;
using caffe::Solver;
using caffe::shared_ptr;
using caffe::string;
using caffe::Timer;
using caffe::vector;
using std::ostringstream;

DEFINE_string(gpu, "",
    "Optional; run in GPU mode on given device IDs separated by ','."
    "Use '-gpu all' to run on all available GPUs. The effective training "
    "batch size is multiplied by the number of devices.");
DEFINE_string(solver, "",
    "The solver definition protocol buffer text file.");
DEFINE_string(model, "",
    "The model definition protocol buffer text file.");
DEFINE_string(phase, "",
    "Optional; network phase (TRAIN or TEST). Only used for 'time'.");
DEFINE_int32(level, 0,
    "Optional; network level.");
DEFINE_string(stage, "",
    "Optional; network stages (not to be confused with phase), "
    "separated by ','.");
DEFINE_string(snapshot, "",
    "Optional; the snapshot solver state to resume training.");
DEFINE_string(weights, "",
    "Optional; the pretrained weights to initialize finetuning, "
    "separated by ','. Cannot be set simultaneously with snapshot.");
DEFINE_int32(iterations, 50,
    "The number of iterations to run.");
DEFINE_string(sigint_effect, "stop",
             "Optional; action to take when a SIGINT signal is received: "
             "snapshot, stop or none.");
DEFINE_string(sighup_effect, "snapshot",
             "Optional; action to take when a SIGHUP signal is received: "
             "snapshot, stop or none.");

// A simple registry for caffe commands.
typedef int (*BrewFunction)();
typedef std::map<caffe::string, BrewFunction> BrewMap;
BrewMap g_brew_map;

/* The RegisterBrewFunction(func) macro below stringizes its argument func and
   stores the corresponding function pointer in the g_brew_map container. func
   takes one of four values: train / test / time / device_query, marking the
   entry points of the four commands; note the macro invocation at the tail of
   each of those four functions below. */
#define RegisterBrewFunction(func) \
namespace { \
class __Registerer_##func { \
 public: /* NOLINT */ \
  __Registerer_##func() { \
    g_brew_map[#func] = &func; \
  } \
}; \
__Registerer_##func g_registerer_##func; \
}

static BrewFunction GetBrewFunction(const caffe::string& name) {
  if (g_brew_map.count(name)) {
    // Return the entry point stored in the container whose name matches.
    return g_brew_map[name];
  } else {
    LOG(ERROR) << "Available caffe actions:";
    for (BrewMap::iterator it = g_brew_map.begin();
         it != g_brew_map.end(); ++it) {
      LOG(ERROR) << "\t" << it->first;
    }
    LOG(FATAL) << "Unknown action: " << name;
    return NULL;  // not reachable, just to suppress old compiler warnings.
  }
}

// Parse GPU ids or use all available devices
static void get_gpus(vector<int>* gpus) {  // GPU information is queried here.
  if (FLAGS_gpu == "all") {
    int count = 0;
#ifndef CPU_ONLY
    CUDA_CHECK(cudaGetDeviceCount(&count));
#else
    NO_GPU;
#endif
    for (int i = 0; i < count; ++i) {
      gpus->push_back(i);
    }
  } else if (FLAGS_gpu.size()) {
    vector<string> strings;
    boost::split(strings, FLAGS_gpu, boost::is_any_of(","));
    for (int i = 0; i < strings.size(); ++i) {
      gpus->push_back(boost::lexical_cast<int>(strings[i]));
    }
  } else {
    CHECK_EQ(gpus->size(), 0);
  }
}

// Parse phase from flags
caffe::Phase get_phase_from_flags(caffe::Phase default_value) {
  if (FLAGS_phase == "")
    return default_value;
  if (FLAGS_phase == "TRAIN")
    return caffe::TRAIN;
  if (FLAGS_phase == "TEST")
    return caffe::TEST;
  LOG(FATAL) << "phase must be \"TRAIN\" or \"TEST\"";
  return caffe::TRAIN;  // Avoid warning
}

// Parse stages from flags
vector<string> get_stages_from_flags() {
  vector<string> stages;
  boost::split(stages, FLAGS_stage, boost::is_any_of(","));
  return stages;
}

// caffe commands to call by
//     caffe <command> <args>
//
// To add a command, define a function "int command()" and register it with
// RegisterBrewFunction(action);

// Device Query: show diagnostic information for a GPU device.
int device_query() {
  LOG(INFO) << "Querying GPUs " << FLAGS_gpu;
  vector<int> gpus;
  get_gpus(&gpus);
  for (int i = 0; i < gpus.size(); ++i) {
    caffe::Caffe::SetDevice(gpus[i]);
    caffe::Caffe::DeviceQuery();
  }
  return 0;
}
// As described above, RegisterBrewFunction adds this entry point to g_brew_map.
RegisterBrewFunction(device_query);

// Load the weights from the specified caffemodel(s) into the train and
// test nets.
void CopyLayers(caffe::Solver<float>* solver, const std::string& model_list) {
  std::vector<std::string> model_names;
  boost::split(model_names, model_list, boost::is_any_of(",") );
  for (int i = 0; i < model_names.size(); ++i) {
    LOG(INFO) << "Finetuning from " << model_names[i];
    solver->net()->CopyTrainedLayersFrom(model_names[i]);
    for (int j = 0; j < solver->test_nets().size(); ++j) {
      solver->test_nets()[j]->CopyTrainedLayersFrom(model_names[i]);
    }
  }
}

// Translate the signal effect the user specified on the command-line to the
// corresponding enumeration.
caffe::SolverAction::Enum GetRequestedAction(
    const std::string& flag_value) {
  if (flag_value == "stop") {
    return caffe::SolverAction::STOP;
  }
  if (flag_value == "snapshot") {
    return caffe::SolverAction::SNAPSHOT;
  }
  if (flag_value == "none") {
    return caffe::SolverAction::NONE;
  }
  LOG(FATAL) << "Invalid signal effect \"" << flag_value << "\" was specified";
}

// Train / Finetune a model.
// Analysis of the train() function:
int train() {
  // train() first checks FLAGS_solver.size(); if it is zero, the user did not
  // pass in a solver file.
  CHECK_GT(FLAGS_solver.size(), 0) << "Need a solver definition to train.";
  /* Next it checks that --weights and --snapshot were not given together:
     --weights is used when starting training from scratch and indicates
     finetuning a model, whereas --snapshot means resuming a training run the
     user previously interrupted, in which case the weights parameter is no
     longer needed. */
  CHECK(!FLAGS_snapshot.size() || !FLAGS_weights.size())
      << "Give a snapshot to resume training or weights to finetune "
      "but not both.";
  vector<string> stages = get_stages_from_flags();

  // The next two lines read and parse the user-defined solver.prototxt.
  caffe::SolverParameter solver_param;
  caffe::ReadSolverParamsFromTextFileOrDie(FLAGS_solver, &solver_param);

  solver_param.mutable_train_state()->set_level(FLAGS_level);
  for (int i = 0; i < stages.size(); i++) {
    solver_param.mutable_train_state()->add_stage(stages[i]);
  }
  /* The code below queries the user's GPU configuration. GPUs can be specified
     on the command line or in solver.prototxt: if the user set a GPU id in
     solver.prototxt, that id is written into FLAGS_gpu; if the user only
     requested GPU mode without specifying an id, the id defaults to 0. */
  // If the gpus flag is not provided, allow the mode and device to be set
  // in the solver prototxt.
  if (FLAGS_gpu.size() == 0
      && solver_param.solver_mode() == caffe::SolverParameter_SolverMode_GPU) {
      if (solver_param.has_device_id()) {
          FLAGS_gpu = "" +
              boost::lexical_cast<string>(solver_param.device_id());
      } else {  // Set default GPU if unspecified
          FLAGS_gpu = "" + boost::lexical_cast<string>(0);
      }
  }

  /* The following block validates the GPU query result: if no GPU information
     was found, training runs on the CPU; otherwise the GPU-specific
     initialization begins. */
  vector<int> gpus;
  get_gpus(&gpus);
  if (gpus.size() == 0) {
    LOG(INFO) << "Use CPU.";
    Caffe::set_mode(Caffe::CPU);
  } else {
    ostringstream s;
    for (int i = 0; i < gpus.size(); ++i) {
      s << (i ? ", " : "") << gpus[i];
    }
    LOG(INFO) << "Using GPUs " << s.str();
#ifndef CPU_ONLY
    cudaDeviceProp device_prop;
    for (int i = 0; i < gpus.size(); ++i) {
      cudaGetDeviceProperties(&device_prop, gpus[i]);
      LOG(INFO) << "GPU " << gpus[i] << ": " << device_prop.name;
    }
#endif
    solver_param.set_device_id(gpus[0]);
    Caffe::SetDevice(gpus[0]);
    Caffe::set_mode(Caffe::GPU);
    Caffe::set_solver_count(gpus.size());
  }

  caffe::SignalHandler signal_handler(
        GetRequestedAction(FLAGS_sigint_effect),
        GetRequestedAction(FLAGS_sighup_effect));

  /* Now the network trainer, the solver, is constructed by calling
     SolverRegistry's CreateSolver function. It is initialized from the
     user-defined solver.prototxt parsed above, and it carries responsibility
     for training the entire network; its detailed structure will be analyzed
     in a later post. */
  shared_ptr<caffe::Solver<float> >
      solver(caffe::SolverRegistry<float>::CreateSolver(solver_param));

  solver->SetActionFunction(signal_handler.GetActionFunction());

  /* Here the code checks whether the user supplied the snapshot or weights
     parameter: if so, the user may wish to resume an interrupted training run
     or initialize the network from another model, and both cases are handled
     through the solver pointer. */
  if (FLAGS_snapshot.size()) {
    LOG(INFO) << "Resuming from " << FLAGS_snapshot;
    solver->Restore(FLAGS_snapshot.c_str());
  } else if (FLAGS_weights.size()) {
    CopyLayers(solver.get(), FLAGS_weights);
  }

  // If more than one GPU takes part in training, multi-GPU mode is enabled.
  if (gpus.size() > 1) {
    caffe::P2PSync<float> sync(solver, NULL, solver->param());
    sync.Run(gpus);
  } else {
    LOG(INFO) << "Starting Optimization";
    solver->Solve();  // The Solve() interface formally starts optimizing the network.
  }
  LOG(INFO) << "Optimization Done.";
  return 0;
}
RegisterBrewFunction(train);

// Test: score a model.
int test() {
  CHECK_GT(FLAGS_model.size(), 0) << "Need a model definition to score.";
  CHECK_GT(FLAGS_weights.size(), 0) << "Need model weights to score.";
  vector<string> stages = get_stages_from_flags();

  // Set device id and mode
  vector<int> gpus;
  get_gpus(&gpus);
  if (gpus.size() != 0) {
    LOG(INFO) << "Use GPU with device ID " << gpus[0];
#ifndef CPU_ONLY
    cudaDeviceProp device_prop;
    cudaGetDeviceProperties(&device_prop, gpus[0]);
    LOG(INFO) << "GPU device name: " << device_prop.name;
#endif
    Caffe::SetDevice(gpus[0]);
    Caffe::set_mode(Caffe::GPU);
  } else {
    LOG(INFO) << "Use CPU.";
    Caffe::set_mode(Caffe::CPU);
  }
  // Instantiate the caffe net.
  Net<float> caffe_net(FLAGS_model, caffe::TEST, FLAGS_level, &stages);
  caffe_net.CopyTrainedLayersFrom(FLAGS_weights);
  LOG(INFO) << "Running for " << FLAGS_iterations << " iterations.";

  vector<int> test_score_output_id;
  vector<float> test_score;
  float loss = 0;
  for (int i = 0; i < FLAGS_iterations; ++i) {
    float iter_loss;
    const vector<Blob<float>*>& result =
        caffe_net.Forward(&iter_loss);
    loss += iter_loss;
    int idx = 0;
    for (int j = 0; j < result.size(); ++j) {
      const float* result_vec = result[j]->cpu_data();
      for (int k = 0; k < result[j]->count(); ++k, ++idx) {
        const float score = result_vec[k];
        if (i == 0) {
          test_score.push_back(score);
          test_score_output_id.push_back(j);
        } else {
          test_score[idx] += score;
        }
        const std::string& output_name = caffe_net.blob_names()[
            caffe_net.output_blob_indices()[j]];
        LOG(INFO) << "Batch " << i << ", " << output_name << " = " << score;
      }
    }
  }
  loss /= FLAGS_iterations;
  LOG(INFO) << "Loss: " << loss;
  for (int i = 0; i < test_score.size(); ++i) {
    const std::string& output_name = caffe_net.blob_names()[
        caffe_net.output_blob_indices()[test_score_output_id[i]]];
    const float loss_weight = caffe_net.blob_loss_weights()[
        caffe_net.output_blob_indices()[test_score_output_id[i]]];
    std::ostringstream loss_msg_stream;
    const float mean_score = test_score[i] / FLAGS_iterations;
    if (loss_weight) {
      loss_msg_stream << " (* " << loss_weight
                      << " = " << loss_weight * mean_score << " loss)";
    }
    LOG(INFO) << output_name << " = " << mean_score << loss_msg_stream.str();
  }

  return 0;
}
RegisterBrewFunction(test);

// Time: benchmark the execution time of a model.
int time() {
  CHECK_GT(FLAGS_model.size(), 0) << "Need a model definition to time.";
  caffe::Phase phase = get_phase_from_flags(caffe::TRAIN);
  vector<string> stages = get_stages_from_flags();

  // Set device id and mode
  vector<int> gpus;
  get_gpus(&gpus);
  if (gpus.size() != 0) {
    LOG(INFO) << "Use GPU with device ID " << gpus[0];
    Caffe::SetDevice(gpus[0]);
    Caffe::set_mode(Caffe::GPU);
  } else {
    LOG(INFO) << "Use CPU.";
    Caffe::set_mode(Caffe::CPU);
  }
  // Instantiate the caffe net.
  Net<float> caffe_net(FLAGS_model, phase, FLAGS_level, &stages);

  // Do a clean forward and backward pass, so that memory allocation are done
  // and future iterations will be more stable.
  LOG(INFO) << "Performing Forward";
  // Note that for the speed benchmark, we will assume that the network does
  // not take any input blobs.
  float initial_loss;
  caffe_net.Forward(&initial_loss);
  LOG(INFO) << "Initial loss: " << initial_loss;
  LOG(INFO) << "Performing Backward";
  caffe_net.Backward();

  const vector<shared_ptr<Layer<float> > >& layers = caffe_net.layers();
  const vector<vector<Blob<float>*> >& bottom_vecs = caffe_net.bottom_vecs();
  const vector<vector<Blob<float>*> >& top_vecs = caffe_net.top_vecs();
  const vector<vector<bool> >& bottom_need_backward =
      caffe_net.bottom_need_backward();
  LOG(INFO) << "*** Benchmark begins ***";
  LOG(INFO) << "Testing for " << FLAGS_iterations << " iterations.";
  Timer total_timer;
  total_timer.Start();
  Timer forward_timer;
  Timer backward_timer;
  Timer timer;
  std::vector<double> forward_time_per_layer(layers.size(), 0.0);
  std::vector<double> backward_time_per_layer(layers.size(), 0.0);
  double forward_time = 0.0;
  double backward_time = 0.0;
  for (int j = 0; j < FLAGS_iterations; ++j) {
    Timer iter_timer;
    iter_timer.Start();
    forward_timer.Start();
    for (int i = 0; i < layers.size(); ++i) {
      timer.Start();
      layers[i]->Forward(bottom_vecs[i], top_vecs[i]);
      forward_time_per_layer[i] += timer.MicroSeconds();
    }
    forward_time += forward_timer.MicroSeconds();
    backward_timer.Start();
    for (int i = layers.size() - 1; i >= 0; --i) {
      timer.Start();
      layers[i]->Backward(top_vecs[i], bottom_need_backward[i],
                          bottom_vecs[i]);
      backward_time_per_layer[i] += timer.MicroSeconds();
    }
    backward_time += backward_timer.MicroSeconds();
    LOG(INFO) << "Iteration: " << j + 1 << " forward-backward time: "
      << iter_timer.MilliSeconds() << " ms.";
  }
  LOG(INFO) << "Average time per layer: ";
  for (int i = 0; i < layers.size(); ++i) {
    const caffe::string& layername = layers[i]->layer_param().name();
    LOG(INFO) << std::setfill(' ') << std::setw(10) << layername <<
      "\tforward: " << forward_time_per_layer[i] / 1000 /
      FLAGS_iterations << " ms.";
    LOG(INFO) << std::setfill(' ') << std::setw(10) << layername  <<
      "\tbackward: " << backward_time_per_layer[i] / 1000 /
      FLAGS_iterations << " ms.";
  }
  total_timer.Stop();
  LOG(INFO) << "Average Forward pass: " << forward_time / 1000 /
    FLAGS_iterations << " ms.";
  LOG(INFO) << "Average Backward pass: " << backward_time / 1000 /
    FLAGS_iterations << " ms.";
  LOG(INFO) << "Average Forward-Backward: " << total_timer.MilliSeconds() /
    FLAGS_iterations << " ms.";
  LOG(INFO) << "Total Time: " << total_timer.MilliSeconds() << " ms.";
  LOG(INFO) << "*** Benchmark ends ***";
  return 0;
}
RegisterBrewFunction(time);

int main(int argc, char** argv) {
  // Entry point of the whole program: first some gflags setup, setting and
  // printing version information, usage information, and so on.
  // Print output to stderr (while still logging).
  FLAGS_alsologtostderr = 1;
  // Set version
  gflags::SetVersionString(AS_STRING(CAFFE_VERSION));
  // Usage message.
  gflags::SetUsageMessage("command line brew\n"
      "usage: caffe <command> <args>\n\n"
      "commands:\n"
      "  train           train or finetune a model\n"
      "  test            score a model\n"
      "  device_query    show GPU diagnostic information\n"
      "  time            benchmark model execution time");
  // Run tool or show usage.
  /* The call below initializes gflags and glog. GlobalInit is defined in
     ./src/caffe/common.cpp in the caffe source tree; its code is quoted here:

     void GlobalInit(int* pargc, char*** pargv) {
       // Google flags.
       ::gflags::ParseCommandLineFlags(pargc, pargv, true);
       // Google logging.
       ::google::InitGoogleLogging(*(pargv)[0]);
       // Provide a backtrace on segfault.
       ::google::InstallFailureSignalHandler();
     }

     Within GlobalInit, ParseCommandLineFlags initializes the gflags
     parameters, InitGoogleLogging initializes the Google logging system, and
     InstallFailureSignalHandler registers a signal handler. */
  caffe::GlobalInit(&argc, &argv);
  if (argc == 2) {
#ifdef WITH_PYTHON_LAYER
    try {
#endif
      /* With initialization complete, the real program entry point is the
         GetBrewFunction call below. Its job is to search the g_brew_map
         container for the function matching caffe::string(argv[1]) and
         return that function's entry point. What does g_brew_map hold? For
         that, look back at #define RegisterBrewFunction(func) above, and
         then at the definition of GetBrewFunction earlier in the file. */
      return GetBrewFunction(caffe::string(argv[1]))();
#ifdef WITH_PYTHON_LAYER
    } catch (bp::error_already_set) {
      PyErr_Print();
      return 1;
    }
#endif
  } else {
    gflags::ShowUsageWithFlagsRestrict(argv[0], "tools/caffe");
  }
}

   The code above shows the overall structure of caffe.cpp. As we can see, caffe.cpp extracts the various parameter files we define as users, performs the necessary initialization, and provides the crucial interface for starting training. Distilled down, the essential control flow is:

main() -> GetBrewFunction() -> train()

   Starting from main(): after briefly initializing gflags and glog, execution moves into GetBrewFunction(). At this stage caffe needs to work out what the user wants to do: train a network, test one, benchmark its timing, or query the devices. Once that is clear, it returns the corresponding function entry point. In the code above we analyzed the most important of these, the train() function, which carries out a series of training initializations and then optimizes the network through the Solve() interface, driven by the various files configured in the user's solver.prototxt.
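   To make the g_brew_map dispatch mechanism concrete, here is a minimal, self-contained C++ sketch of the same registry pattern stripped of everything caffe-specific (the names REGISTER_BREW, demo_train, and demo_test are invented for illustration):

#include <cstdio>
#include <map>
#include <string>

// Mirrors caffe's BrewFunction: commands are plain "int fn()" functions.
typedef int (*BrewFunction)();
typedef std::map<std::string, BrewFunction> BrewMap;
BrewMap g_brew_map;

// Mirrors RegisterBrewFunction: a static object in an anonymous namespace
// whose constructor runs before main() and inserts the function pointer
// into the map under its stringized name (#func).
#define REGISTER_BREW(func)                            \
  namespace {                                          \
  struct Registerer_##func {                           \
    Registerer_##func() { g_brew_map[#func] = &func; } \
  };                                                   \
  Registerer_##func g_registerer_##func;               \
  }

int demo_train() { std::printf("training would start here\n"); return 0; }
REGISTER_BREW(demo_train);

int demo_test() { std::printf("testing would start here\n"); return 0; }
REGISTER_BREW(demo_test);

int main(int argc, char** argv) {
  if (argc != 2 || g_brew_map.count(argv[1]) == 0) {
    std::printf("usage: demo <demo_train|demo_test>\n");
    return 1;
  }
  // Look the command name up in the registry and call through the function
  // pointer, just like GetBrewFunction(caffe::string(argv[1]))() does.
  return g_brew_map[argv[1]]();
}

   Because registration happens in static initializers, adding a new command needs no central dispatch table to be edited: defining the function and invoking the macro is enough, which is exactly what caffe's own comment means by "To add a command, define a function 'int command()' and register it with RegisterBrewFunction(action);".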

   One more thing worth noting: the caffe codebase makes heavy use of gflags and glog. The former parses command-line flags, while the latter is an effective logging tool; I recommend getting at least passingly familiar with both before reading the caffe source.
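   For readers who have not used the two libraries, the following minimal sketch shows the gflags and glog idioms that caffe.cpp relies on (the flag name and messages are invented for illustration; the API calls are the standard gflags/glog ones):

#include <gflags/gflags.h>
#include <glog/logging.h>

// gflags: DEFINE_string creates a --solver flag readable as FLAGS_solver.
DEFINE_string(solver, "", "Path to a solver definition text file.");

int main(int argc, char** argv) {
  FLAGS_alsologtostderr = 1;  // glog: echo log lines to stderr as well
  // gflags: parse --flags out of argv (the 'true' removes them from argv).
  gflags::ParseCommandLineFlags(&argc, &argv, true);
  // glog: initialize logging, tagging log files with the program name.
  google::InitGoogleLogging(argv[0]);

  LOG(INFO) << "solver flag = " << FLAGS_solver;
  // glog: CHECK_GT aborts with a fatal log message if the check fails,
  // which is exactly how train() enforces that --solver was given.
  CHECK_GT(FLAGS_solver.size(), 0) << "Need a solver definition.";
  return 0;
}

   Built with -lgflags -lglog, running this with --solver=foo.prototxt prints the log line and passes the check, while running it without --solver aborts with the fatal message, mirroring what you see when caffe is launched without a solver.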

   This concludes the walkthrough of caffe.cpp. In short, caffe.cpp provides the interface for driving the network as a whole; like the spark that sets the prairie ablaze, it is from this file that the entire caffe architecture will gradually become transparent.

   As a relative beginner I will inevitably make mistakes and omissions in these posts. I welcome readers' criticism and corrections, and even more their frank advice, which I will accept with an open mind!

   To close, two blog posts that helped me a great deal:

   1) 一路颠簸's blog (link in the original post)

   2) Tang Xu (汤旭)'s study notes (link in the original post)

   I hope you will read my subsequent posts on the caffe source; your support and encouragement are my greatest motivation!



written by jiong

Only when you haven't worked hard enough in the present do you find yourself missing the past.
