Caffe notes -- Layers
http://caffe.berkeleyvision.org/tutorial/layers.html
Google Protocol Buffers introduction (part 1): http://www.jianshu.com/p/7de98349cadd
Data Layers
Data enters Caffe through data layers: they lie at the bottom of nets.
Note that the Python Layer can be useful for creating custom data layers.
```python
import caffe
import numpy as np


class EuclideanLossLayer(caffe.Layer):
    """
    Compute the Euclidean Loss in the same manner as the C++
    EuclideanLossLayer to demonstrate the class interface for
    developing layers in Python.
    """

    def setup(self, bottom, top):
        # check input pair
        if len(bottom) != 2:
            raise Exception("Need two inputs to compute distance.")

    def reshape(self, bottom, top):
        # check input dimensions match
        if bottom[0].count != bottom[1].count:
            raise Exception("Inputs must have the same dimension.")
        # difference is shape of inputs
        self.diff = np.zeros_like(bottom[0].data, dtype=np.float32)
        # loss output is scalar
        top[0].reshape(1)

    def forward(self, bottom, top):
        self.diff[...] = bottom[0].data - bottom[1].data
        top[0].data[...] = np.sum(self.diff**2) / bottom[0].num / 2.

    def backward(self, top, propagate_down, bottom):
        for i in range(2):
            if not propagate_down[i]:
                continue
            if i == 0:
                sign = 1
            else:
                sign = -1
            bottom[i].diff[...] = sign * self.diff / bottom[i].num
```
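A layer written this way is wired into a network through a layer of type `Python` in the net prototxt. The sketch below assumes the class above is saved as `euclidean_loss_layer.py` somewhere on the `PYTHONPATH`; the layer name and the blob names `pred`/`label` are illustrative, not fixed by Caffe:

```
layer {
  name: "loss"
  type: "Python"
  bottom: "pred"
  bottom: "label"
  top: "loss"
  python_param {
    module: "euclidean_loss_layer"  # Python file (without .py) on PYTHONPATH
    layer: "EuclideanLossLayer"     # class name inside that module
  }
  # declare it as a loss so Caffe backpropagates from it
  loss_weight: 1
}
```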
`propagate_down: const vector<bool>&`
a vector with length equal to `bottom`, each index indicating whether to propagate the error gradients down to the bottom blob at the corresponding index
http://www.jianshu.com/p/622b8bb24589
In the `reshape` function, the shape of `top` must be fixed. For example, for a loss layer the output is a single loss value, so the top's shape is 1*1, which is written as:
top[0].reshape(1,1)
Since this case is simple, there is no need to elaborate. One point is puzzling, though: why does the backward pass also compute a partial derivative with respect to the label y? Of course, if y is not read as a label, and this Euclidean loss is understood in the same way as the loss introduced above, then taking the derivative with respect to y is understandable.
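The backward formulas above can be checked numerically without running Caffe at all. The sketch below re-implements the layer's forward pass in plain NumPy and compares the analytic gradients (`diff / num` for bottom[0], `-diff / num` for bottom[1]) against a central finite difference; the function names are illustrative, not part of any Caffe API:

```python
import numpy as np

def euclidean_loss(x, y):
    # L = sum((x - y)^2) / (2 * n), matching the layer's forward pass,
    # where n is the batch size (bottom[0].num)
    n = x.shape[0]
    return np.sum((x - y) ** 2) / n / 2.0

def grads(x, y):
    # analytic gradients used by the layer's backward pass
    n = x.shape[0]
    diff = x - y
    return diff / n, -diff / n   # dL/dx, dL/dy

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))
y = rng.standard_normal((4, 3))
gx, gy = grads(x, y)

# central finite-difference check on one element of x
eps = 1e-6
xp = x.copy(); xp[0, 0] += eps
xm = x.copy(); xm[0, 0] -= eps
num = (euclidean_loss(xp, y) - euclidean_loss(xm, y)) / (2 * eps)
print(abs(num - gx[0, 0]) < 1e-6)  # analytic and numeric gradients agree
```

The sign flip for the second bottom also drops out directly: since the loss is symmetric up to `x - y`, the gradient with respect to y is exactly the negative of the gradient with respect to x.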
Vision Layers
Vision layers usually take images as input and produce other images as output, although they can take data of other types and dimensions.
Convolution Layer
Input:
n * c_i * h_i * w_i
Output:
n * c_o * h_o * w_o, where h_o = (h_i + 2 * pad_h - kernel_h) / stride_h + 1 and w_o likewise.
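The output-size formula above can be sketched as a small helper; `conv_out_dim` is an illustrative name, and the floor (integer) division matches how Caffe rounds the result:

```python
def conv_out_dim(in_dim, pad, kernel, stride):
    # h_o = (h_i + 2 * pad_h - kernel_h) / stride_h + 1, with integer division
    return (in_dim + 2 * pad - kernel) // stride + 1

# e.g. a 227x227 input through an 11x11 kernel, stride 4, no padding
h_o = conv_out_dim(227, pad=0, kernel=11, stride=4)
w_o = conv_out_dim(227, pad=0, kernel=11, stride=4)
print(h_o, w_o)  # 55 55

# a 3x3 kernel with pad 1 and stride 1 preserves the spatial size
print(conv_out_dim(32, pad=1, kernel=3, stride=1))  # 32
```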
Learning rate and weight decay:
http://yufeigan.github.io/2014/11/29/Deep-Learning-%E4%BC%98%E5%8C%96%E6%96%B9%E6%B3%95%E6%80%BB%E7%BB%93/
Weight initialization: http://www.jianshu.com/p/03009cfdf733
CPU implementation: ./src/caffe/layers/conv_layer.cpp:
https://github.com/BVLC/caffe/blob/master/src/caffe/layers/conv_layer.cpp#L61
The `this` pointer:
`this` is a C++ keyword and a const pointer that points to the current object; through it, all members of the current object can be accessed.
The "current object" is the object currently being operated on. For example, in `stu.show();`, `stu` is the current object, and `this` points to `stu`.
`Blob<int> kernel_shape_`
The spatial dimensions of a filter kernel.
http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1BaseConvolutionLayer.html