TensorFlow Study Notes (12): Normalization
Normalization
local_response_normalization
local_response_normalization first appeared in the paper "ImageNet Classification with Deep Convolutional Neural Networks" (AlexNet), where the authors report that this kind of normalization helps generalization.
After a conv2d or pooling op we obtain a tensor of shape [batch_size, height, width, channels]. In what follows, think of the channels dimension as a stack of layers and ignore batch_size.
The normalized output is

$$ b^{i}_{x,y} = a^{i}_{x,y} \Big/ \Big( k + \alpha \sum_{j=\max(0,\, i-n/2)}^{\min(N-1,\, i+n/2)} \big(a^{j}_{x,y}\big)^{2} \Big)^{\beta} $$

where $a^{i}_{x,y}$ is the activation in channel $i$ at position $(x, y)$, $N$ is the total number of channels, and $k$, $n$, $\alpha$, $\beta$ are hyper-parameters.

As you can see, what this function does is divide each activation by a weighted sum of squared activations from neighboring channels at the same spatial position. In AlexNet the constants are $k = 2$, $n = 5$, $\alpha = 10^{-4}$, $\beta = 0.75$.
```python
tf.nn.local_response_normalization(input, depth_radius=None, bias=None,
                                   alpha=None, beta=None, name=None)
```

Local Response Normalization. The 4-D input tensor is treated as a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently. Within a given vector, each component is divided by the weighted, squared sum of inputs within depth_radius.

- input: A Tensor. Must be one of the following types: float32, half. 4-D.
- depth_radius: An optional int. Defaults to 5. 0-D. Half-width of the 1-D normalization window.
- bias: An optional float. Defaults to 1. An offset (usually positive to avoid dividing by 0).
- alpha: An optional float. Defaults to 1. A scale factor, usually positive.
- beta: An optional float. Defaults to 0.5. An exponent.
- name: A name for the operation (optional).
- depth_radius: corresponds to $n/2$ in the formula
- bias: corresponds to $k$ in the formula
- input: just feed in the output of conv2d or pooling, shape [batch_size, height, width, channels]
- returns: the normalized tensor, shape [batch_size, height, width, channels]
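The formula can be sketched in plain NumPy; this is an illustrative reimplementation (reusing TF's parameter names for clarity), not TensorFlow's own kernel:

```python
import numpy as np

def lrn(x, depth_radius=5, bias=1.0, alpha=1.0, beta=0.5):
    """NumPy sketch of local response normalization over the last axis.

    x has shape [batch_size, height, width, channels]; the defaults
    mirror tf.nn.local_response_normalization.
    """
    channels = x.shape[-1]
    out = np.empty_like(x)
    for i in range(channels):
        lo = max(0, i - depth_radius)             # window start: i - n/2
        hi = min(channels - 1, i + depth_radius)  # window end: i + n/2
        sqr_sum = np.sum(x[..., lo:hi + 1] ** 2, axis=-1)
        out[..., i] = x[..., i] / (bias + alpha * sqr_sum) ** beta
    return out

x = np.random.rand(2, 4, 4, 8).astype(np.float32)
assert lrn(x).shape == x.shape  # shape is unchanged by normalization
```

Each output channel is divided by the squared sum over a window of at most 2 * depth_radius + 1 neighboring channels, clipped at the channel boundaries.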
batch_normalization
Paper: "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift" (Ioffe & Szegedy, 2015).
batch_normalization, as the name implies, performs normalization per mini-batch.

- Input: values of $x$ over a mini-batch $\mathcal{B} = \{x_1, \ldots, x_m\}$; learnable parameters $\gamma$, $\beta$
- Output: $\{y_i = \mathrm{BN}_{\gamma,\beta}(x_i)\}$

The algorithm:

(1) mini-batch mean: $\mu_\mathcal{B} = \frac{1}{m} \sum_{i=1}^{m} x_i$
(2) mini-batch variance: $\sigma_\mathcal{B}^2 = \frac{1}{m} \sum_{i=1}^{m} (x_i - \mu_\mathcal{B})^2$
(3) normalize: $\hat{x}_i = \dfrac{x_i - \mu_\mathcal{B}}{\sqrt{\sigma_\mathcal{B}^2 + \epsilon}}$
(4) scale and shift: $y_i = \gamma \hat{x}_i + \beta$
As you can see, batch_normalization does not change the dimensionality of the data at all; only the values change.
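The four steps can be written out directly in NumPy (an illustrative sketch, not TensorFlow's implementation; `batch_norm` is a hypothetical helper name):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-3):
    """NumPy sketch of the four batch-normalization steps.

    x: mini-batch of shape [m, ...]; gamma, beta: learnable scale and shift.
    """
    mu = x.mean(axis=0)                    # (1) mini-batch mean
    var = x.var(axis=0)                    # (2) mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # (3) normalize
    return gamma * x_hat + beta            # (4) scale and shift

x = np.array([[1.0, 2.0], [3.0, 4.0]])
y = batch_norm(x, gamma=1.0, beta=0.0)
assert y.shape == x.shape  # dimensionality is unchanged
```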
Function:
tf.nn.batch_normalization()

```python
def batch_normalization(x, mean, variance, offset, scale,
                        variance_epsilon, name=None):
```

Args:
- x: Input Tensor of arbitrary dimensionality.
- mean: A mean Tensor.
- variance: A variance Tensor.
- offset: An offset Tensor, often denoted β in equations, or None. If present, it is added to the normalized tensor.
- scale: A scale Tensor, often denoted γ in equations, or None. If present, the scale is applied to the normalized tensor.
- variance_epsilon: A small float number to avoid dividing by 0.
- name: A name for this operation (optional).
- Returns: the normalized, scaled, offset tensor.
For convolutions, x has shape [batch, height, width, depth]. In the convolutional case we want the statistics to be shared across each feature map, i.e. one mean and one variance per channel, computed jointly over the batch and spatial dimensions.
Now we need a function that returns the mean and variance; see below.
tf.nn.moments()
```python
def moments(x, axes, shift=None, name=None, keep_dims=False):
    # for simple batch normalization pass `axes=[0]` (batch only)
```
For convolutional batch_normalization, x has shape [batch_size, height, width, depth] and axes=[0, 1, 2]; the call then returns (mean, variance), where mean and variance each have shape [depth], one value per channel.
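Putting the pieces together, the convolutional pipeline (per-channel moments over axes (0, 1, 2), then normalization) can be emulated in NumPy; this mirrors tf.nn.moments followed by tf.nn.batch_normalization but does not require TensorFlow:

```python
import numpy as np

x = np.random.rand(8, 5, 5, 3).astype(np.float32)  # [batch, height, width, depth]

# Per-channel statistics, as tf.nn.moments(x, axes=[0, 1, 2]) would give:
mean = x.mean(axis=(0, 1, 2))  # shape [3], one mean per channel
var = x.var(axis=(0, 1, 2))    # shape [3], one variance per channel

gamma = np.ones(3, dtype=np.float32)   # scale (gamma)
beta = np.zeros(3, dtype=np.float32)   # offset (beta)
eps = 1e-3                             # variance_epsilon

y = gamma * (x - mean) / np.sqrt(var + eps) + beta
assert y.shape == x.shape  # same shape, normalized per channel
```

Note that mean and var broadcast against the last axis of x, so every position in a channel is normalized by that channel's shared statistics.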