CRF as RNN Code Walkthrough


Paper: http://www.robots.ox.ac.uk/~szheng/papers/CRFasRNN.pdf
The code for CRF as RNN can be found at https://github.com/torrvision/crfasrnn.
An online demo is available at http://www.robots.ox.ac.uk/~szheng/crfasrnndemo.

This post records my reading of the MultiStageMeanfieldLayer in CRF as RNN. The files involved are the header and implementation of multi_stage_meanfield, and the header and implementation of meanfield.

The code is based on an old version of Caffe, where most layer declarations live in vision_layers.hpp; the relevant ones are class MultiStageMeanfieldLayer and class MeanfieldIteration. They are fairly simple: MultiStageMeanfieldLayer is the actual layer, while MeanfieldIteration is a helper class, so let's go straight to the implementation.

The setup entry point of the layer is LayerSetUp. It begins by initializing member variables, then reads spatial.par and bilateral.par, and then computes the spatial kernel by calling compute_spatial_kernel():

template <typename Dtype>
void MultiStageMeanfieldLayer<Dtype>::compute_spatial_kernel(float* const output_kernel) {
  for (int p = 0; p < num_pixels_; ++p) {
    output_kernel[2*p] = static_cast<float>(p % width_) / theta_gamma_;
    output_kernel[2*p + 1] = static_cast<float>(p / width_) / theta_gamma_;
  }
}

The function is simple: it fills a buffer twice the size of the number of pixels, storing for each pixel the feature pair (column / theta_gamma_, row / theta_gamma_); a small worked example of this layout follows below.
Next, spatial_lattice_ is initialized with this kernel, memory is allocated for the unary terms needed later, and, since mean field inference is run several times, each MeanfieldIteration is initialized once. With that, the layer is ready.
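
To make the layout concrete, here is a minimal standalone sketch that fills the same interleaved (x, y) feature buffer. The image size and the theta_gamma value are made up purely for illustration:

#include <cstdio>
#include <vector>

// Minimal sketch: build the spatial feature buffer the same way as
// compute_spatial_kernel(), for an assumed 4x3 image with theta_gamma = 3.
int main() {
  const int width = 4, height = 3, num_pixels = width * height;
  const float theta_gamma = 3.0f;
  std::vector<float> kernel(2 * num_pixels);
  for (int p = 0; p < num_pixels; ++p) {
    kernel[2 * p]     = static_cast<float>(p % width) / theta_gamma;  // x / theta_gamma
    kernel[2 * p + 1] = static_cast<float>(p / width) / theta_gamma;  // y / theta_gamma
  }
  // Each pixel ends up with a 2-D feature; presumably this is the buffer that
  // spatial_lattice_ is initialized with, analogous to the 5-D bilateral
  // init(..., 5, num_pixels_) shown later in Forward_cpu.
  for (int p = 0; p < num_pixels; ++p) {
    std::printf("pixel %2d -> (%.3f, %.3f)\n", p, kernel[2 * p], kernel[2 * p + 1]);
  }
  return 0;
}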

Next comes Forward_cpu:

/**
 * Performs filter-based mean field inference given the image and unaries.
 *
 * bottom[0] - Unary terms
 * bottom[1] - Softmax input/Output from the previous iteration (a copy of the unary terms if this is the first stage).
 * bottom[2] - RGB images
 *
 * top[0] - Output of the mean field inference (not normalized).
 */
template <typename Dtype>
void MultiStageMeanfieldLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {

  split_layer_bottom_vec_[0] = bottom[0];
  split_layer_->Forward(split_layer_bottom_vec_, split_layer_top_vec_);

  // Initialize the bilateral lattices.
  bilateral_lattices_.resize(num_);
  for (int n = 0; n < num_; ++n) {

    compute_bilateral_kernel(bottom[2], n, bilateral_kernel_buffer_.get());
    bilateral_lattices_[n].reset(new ModifiedPermutohedral());
    bilateral_lattices_[n]->init(bilateral_kernel_buffer_.get(), 5, num_pixels_);

    // Calculate bilateral filter normalization factors.
    Dtype* norm_output_data = bilateral_norms_.mutable_cpu_data() + bilateral_norms_.offset(n);
    bilateral_lattices_[n]->compute(norm_output_data, norm_feed_.get(), 1);
    for (int i = 0; i < num_pixels_; ++i) {
      norm_output_data[i] = 1.f / (norm_output_data[i] + 1e-20f);
    }
  }

  for (int i = 0; i < num_iterations_; ++i) {
    meanfield_iterations_[i]->PrePass(this->blobs_, &bilateral_lattices_, &bilateral_norms_);
    meanfield_iterations_[i]->Forward_cpu();
  }
}

The final loop simply runs each of the mean field iterations set up earlier once. Before that, the unary input is duplicated through the split layer, and for every image in the batch the bilateral kernel is computed, a ModifiedPermutohedral lattice is initialized from it, and the bilateral filter normalization factors are calculated.
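
compute_bilateral_kernel() itself is not shown above. Judging from the init(bilateral_kernel_buffer_.get(), 5, num_pixels_) call and the bilateral term in the paper, each pixel should get a 5-D feature: position scaled by a theta_alpha parameter and RGB scaled by a theta_beta parameter. The following is only a sketch of that idea, not code copied from the repo; the member names theta_alpha_ and theta_beta_ and the channel layout are assumed by analogy with compute_spatial_kernel():

// Sketch only: a plausible compute_bilateral_kernel(), NOT the repo's code.
// Builds a 5-D feature (x/theta_alpha, y/theta_alpha, r/theta_beta, g/theta_beta, b/theta_beta)
// per pixel of image n, assuming the usual Caffe C x H x W memory layout.
template <typename Dtype>
void MultiStageMeanfieldLayer<Dtype>::compute_bilateral_kernel(
    const Blob<Dtype>* const rgb_blob, const int n, float* const output_kernel) {
  const Dtype* const rgb = rgb_blob->cpu_data() + rgb_blob->offset(n);  // start of image n
  for (int p = 0; p < num_pixels_; ++p) {
    output_kernel[5 * p]     = static_cast<float>(p % width_) / theta_alpha_;               // x
    output_kernel[5 * p + 1] = static_cast<float>(p / width_) / theta_alpha_;               // y
    output_kernel[5 * p + 2] = static_cast<float>(rgb[p]) / theta_beta_;                    // R
    output_kernel[5 * p + 3] = static_cast<float>(rgb[num_pixels_ + p]) / theta_beta_;      // G
    output_kernel[5 * p + 4] = static_cast<float>(rgb[2 * num_pixels_ + p]) / theta_beta_;  // B
  }
}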

Next is Backward_cpu():

/**
 * Backprop through filter-based mean field inference.
 */
template<typename Dtype>
void MultiStageMeanfieldLayer<Dtype>::Backward_cpu(
    const vector<Blob<Dtype>*>& top, const vector<bool>& propagate_down,
    const vector<Blob<Dtype>*>& bottom) {

  for (int i = (num_iterations_ - 1); i >= 0; --i) {
    meanfield_iterations_[i]->Backward_cpu();
  }

  vector<bool> split_layer_propagate_down(1, true);
  split_layer_->Backward(split_layer_top_vec_, split_layer_propagate_down, split_layer_bottom_vec_);

  // Accumulate diffs from mean field iterations.
  for (int blob_id = 0; blob_id < this->blobs_.size(); ++blob_id) {

    Blob<Dtype>* cur_blob = this->blobs_[blob_id].get();

    if (this->param_propagate_down_[blob_id]) {

      caffe_set(cur_blob->count(), Dtype(0), cur_blob->mutable_cpu_diff());

      for (int i = 0; i < num_iterations_; ++i) {
        const Dtype* diffs_to_add = meanfield_iterations_[i]->blobs()[blob_id]->cpu_diff();
        caffe_axpy(cur_blob->count(), Dtype(1.), diffs_to_add, cur_blob->mutable_cpu_diff());
      }
    }
  }
}

It starts by running Backward_cpu on each MeanfieldIteration in reverse order, then backpropagates through the split layer. After that come two nested for loops: the outer one iterates over all the parameter blobs, and the inner one sums the diff produced by every iteration into the corresponding blob's diff (after first zeroing it).
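
caffe_axpy(n, a, x, y) computes y := a*x + y (a thin wrapper around the BLAS axpy routine), so the inner loop really is just an element-wise sum of the per-iteration diffs. A tiny standalone sketch of the same accumulation, with made-up numbers:

#include <cstdio>
#include <vector>

// Mimic the accumulation in Backward_cpu: zero the parameter diff, then add
// the diff contributed by each mean field iteration (values are made up).
static void axpy(int n, float a, const float* x, float* y) {
  for (int i = 0; i < n; ++i) y[i] += a * x[i];  // what caffe_axpy does
}

int main() {
  const int count = 4;  // pretend the parameter blob has 4 entries
  std::vector<std::vector<float>> iter_diffs = {
      {0.10f, -0.20f,  0.00f, 0.30f},   // diff from iteration 0
      {0.05f,  0.10f, -0.10f, 0.00f},   // diff from iteration 1
      {-0.15f, 0.00f,  0.20f, 0.10f}};  // diff from iteration 2
  std::vector<float> blob_diff(count, 0.0f);  // caffe_set(count, 0, diff)
  for (const auto& d : iter_diffs) {
    axpy(count, 1.0f, d.data(), blob_diff.data());
  }
  for (float v : blob_diff) std::printf("%.2f ", v);  // prints: 0.00 -0.10 0.10 0.40
  std::printf("\n");
  return 0;
}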

PS:

  • For Caffe's math helpers, see math_functions; http://www.cnblogs.com/jianyingzhou/p/4444728.html is also useful.
  • For the underlying CBLAS routines, see http://www.math.utah.edu/software/lapack/lapack-blas.html.
  • A walkthrough of Blob: http://www.tuicool.com/articles/6rUVNf2
  • For CRFs in general, see http://blog.csdn.net/thesby/article/details/50969788

————————————————— Update 2016.06.23 —————————————————
I have since merged this version of Caffe into the latest official Caffe, because its original base was simply too old. The download link is here.
