Caffe Official Tutorial: Nets, Layers, and Blobs (a breakdown of the Caffe model)


Official tutorial page: http://caffe.berkeleyvision.org/tutorial/net_layer_blob.html

Overview:

The Caffe computation model is a layered framework: layers are stacked from bottom to top, from the input data up to the loss. Data and gradients flow through the network via forward and backward passes.

Blobs carry the information passed between connected layers.

The solver is used for model configuration and optimization.


1. Blob storage and communication


For example, in a 4D blob, the value at index (n, k, h, w) is physically located at index ((n * K + k) * H + h) * W + w.
The layout is row-major: all indices start at 0, and the last dimension (width) varies fastest.
  • Number / N is the batch size of the data. Batch processing achieves better throughput for communication and device processing. For an ImageNet training batch of 256 images N = 256.
  • Channel / K is the feature dimension e.g. for RGB images K = 3.
2. Implementation Details


A Blob stores both data (values) and diff (gradients), and its contents can be synchronized between CPU and GPU memory. The accessors come in const and mutable flavors:

const Dtype* cpu_data() const;
Dtype* mutable_cpu_data();

If you want to check out when a Blob will copy data, here is an illustrative example:

// Assuming that data are on the CPU initially, and we have a blob.
const Dtype* foo;
Dtype* bar;
foo = blob.gpu_data(); // data copied cpu->gpu.
foo = blob.cpu_data(); // no data copied since both have up-to-date contents.
bar = blob.mutable_gpu_data(); // no data copied.
// ... some operations ...
bar = blob.mutable_gpu_data(); // no data copied when we are still on GPU.
foo = blob.cpu_data(); // data copied gpu->cpu, since the gpu side has modified the data.
foo = blob.gpu_data(); // no data copied since both have up-to-date contents.
bar = blob.mutable_cpu_data(); // still no data copied.
bar = blob.mutable_gpu_data(); // data copied cpu->gpu.
bar = blob.mutable_cpu_data(); // data copied gpu->cpu.


