Learning Caffe: ProtoBuffer


This article gives a brief introduction to Caffe's Protocol Buffers mechanism.

Protocol Buffers is a data-interchange format open-sourced by Google, similar in purpose to XML. In Caffe it is used to parse configuration files and to pass messages and parameters (it standardizes serialization and deserialization).

Taking solver.prototxt as an example, its role can be summed up in one sentence:

At run time, the configuration values in solver.prototxt are parsed according to the protocol defined in caffe.proto and loaded into the in-memory solver object's SolverParameter member param_ (net.prototxt works the same way).

This makes parameter management convenient: to change a parameter value, you only need to edit solver.prototxt.
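To make the loading step concrete, here is a minimal sketch of the idea: each `key: value` line in the text file is matched against a declared field and stored on an in-memory object. Real Caffe delegates this to the protobuf library's TextFormat parser; the function below is illustrative only and handles only flat (non-nested) fields.

```python
def parse_prototxt(text):
    """Toy parser: turn simple 'key: value' lines into a dict."""
    params = {}
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()   # drop comments
        if not line or ':' not in line:
            continue
        key, value = (s.strip() for s in line.split(':', 1))
        value = value.strip('"')               # unquote string values
        # try numeric conversion, mimicking typed proto fields
        for cast in (int, float):
            try:
                value = cast(value)
                break
            except ValueError:
                pass
        params[key] = value
    return params

solver_text = '''
net: "examples/mnist/lenet_train_test.prototxt"
base_lr: 0.01    # the base learning rate
max_iter: 10000
'''
param = parse_prototxt(solver_text)
print(param['base_lr'])   # 0.01
print(param['max_iter'])  # 10000
```

The real parser additionally validates each key against the fields declared in caffe.proto and rejects unknown names, which is why every prototxt parameter must have a matching declaration.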

具体来讲:

1.caffe.proto

caffe.proto specifies the data structures and parameter names (you can think of it as the declaration of the parameters). For example, it describes the solver's parameters as follows (the message in turn references types such as NetParameter, which are also defined in caffe.proto):

```protobuf
message SolverParameter {
  optional string net = 24;
  optional NetParameter net_param = 25;
  optional string train_net = 1; // Proto filename for the train net.
  repeated string test_net = 2; // Proto filenames for the test nets.
  optional NetParameter train_net_param = 21; // Inline train net params.
  repeated NetParameter test_net_param = 22; // Inline test net params.
  optional NetState train_state = 26;
  repeated NetState test_state = 27;
  // The number of iterations for each test net.
  repeated int32 test_iter = 3;
  // The number of iterations between two testing phases.
  optional int32 test_interval = 4 [default = 0];
  optional bool test_compute_loss = 19 [default = false];
  // If true, run an initial test pass before the first iteration,
  // ensuring memory availability and printing the starting value of the loss.
  optional bool test_initialization = 32 [default = true];
  optional float base_lr = 5; // The base learning rate
  // the number of iterations between displaying info. If display = 0, no info
  // will be displayed.
  optional int32 display = 6;
  // Display the loss averaged over the last average_loss iterations
  optional int32 average_loss = 33 [default = 1];
  optional int32 max_iter = 7; // the maximum number of iterations
  // accumulate gradients over `iter_size` x `batch_size` instances
  optional int32 iter_size = 36 [default = 1];
  // ...
}
```
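The `optional` and `repeated` labels above determine how the generated parameter class behaves: an optional field holds a single value (with a declared `[default = ...]` when unset), while a repeated field behaves like a list. The Python dataclass below only sketches those semantics for a few of the fields; it is not the class protobuf actually generates.

```python
# Illustrative model of a few SolverParameter fields and their semantics.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SolverParameterSketch:
    net: Optional[str] = None                          # optional string net = 24;
    test_net: List[str] = field(default_factory=list)  # repeated string test_net = 2;
    test_interval: int = 0                             # optional ... [default = 0];
    average_loss: int = 1                              # optional ... [default = 1];
    max_iter: Optional[int] = None                     # optional int32 max_iter = 7;

p = SolverParameterSketch(net="lenet_train_test.prototxt", max_iter=10000)
p.test_net.append("test_a.prototxt")   # repeated field: append values
print(p.test_interval)                 # 0, the declared default
print(p.average_loss)                  # 1, the declared default
```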

2.xxx.prototxt

We know that training a network requires defining a net.prototxt and a solver.prototxt. Take lenet_solver.prototxt as an example:

```
# The train/test net protocol buffer definition
net: "examples/mnist/lenet_train_test.prototxt"
# test_iter specifies how many forward passes the test should carry out.
# In the case of MNIST, we have test batch size 100 and 100 test iterations,
# covering the full 10,000 testing images.
test_iter: 100
# Carry out testing every 500 training iterations.
test_interval: 500
# The base learning rate, momentum and the weight decay of the network.
base_lr: 0.01
momentum: 0.9
weight_decay: 0.0005
# The learning rate policy
lr_policy: "inv"
gamma: 0.0001
power: 0.75
# Display every 100 iterations
display: 100
# The maximum number of iterations
max_iter: 10000
# snapshot intermediate results
snapshot: 5000
snapshot_prefix: "examples/mnist/lenet"
# solver mode: CPU or GPU
solver_mode: GPU
```
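The solver above sets `lr_policy: "inv"` together with `gamma` and `power`. In Caffe this policy computes the learning rate as lr = base_lr * (1 + gamma * iter)^(-power), so the rate decays smoothly from `base_lr` over training. A quick evaluation of that formula with the values from this file:

```python
# Evaluate Caffe's "inv" learning-rate policy with the solver's values.
base_lr, gamma, power = 0.01, 0.0001, 0.75

def inv_lr(iteration):
    return base_lr * (1.0 + gamma * iteration) ** (-power)

print(inv_lr(0))                 # 0.01 at the start (equals base_lr)
print(round(inv_lr(10000), 6))   # ~0.005946 at max_iter
```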

Every parameter that may appear in solver.prototxt, along with its allowed values, is defined in the SolverParameter message of caffe.proto.
Likewise, when writing net.prototxt, each parameter must have a corresponding definition in caffe.proto.

It follows that if you want to define a custom layer, you need to add the corresponding message (and a field referencing it) to caffe.proto, so that its parameters can be parsed at run time.
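As a hedged sketch of what that addition might look like (the message name, field name, and field number below are all hypothetical; the field number must not collide with any existing LayerParameter field):

```protobuf
// Hypothetical parameter message for a custom layer.
message MyCustomParameter {
  optional float alpha = 1 [default = 1.0];
}

// A matching field would then be added inside the existing
// LayerParameter message (147 is only an example number):
//   optional MyCustomParameter my_custom_param = 147;
```

After rebuilding, a `my_custom_param { alpha: 2.0 }` block in net.prototxt would be parsed just like the built-in layer parameters.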
