[深度学习论文笔记][Weight Initialization] Delving deep into rectifiers: Surpassing human-level performance


He, Kaiming, et al. “Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification.” Proceedings of the IEEE International Conference on Computer Vision. 2015. [Citations: 477].


1 PReLU

[PReLU]

$$f(y_d) = \max(0, y_d) + \alpha_d \min(0, y_d) = \begin{cases} y_d, & y_d > 0 \\ \alpha_d \, y_d, & y_d \le 0 \end{cases}$$

• α is a learnable parameter.
• If α is fixed to a small number, PReLU degenerates to Leaky ReLU (LReLU); however, LReLU has negligible impact on accuracy compared with ReLU.
• We allow α to vary across channels, i.e., one α_d per channel d (a sketch is given below).
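
As a minimal sketch (NumPy; the NCHW shapes below are made up for illustration), channel-wise PReLU can be written as:

```python
import numpy as np

def prelu(y, alpha):
    """Channel-wise PReLU: f(y) = max(0, y) + alpha * min(0, y).

    y     : responses of shape (N, C, H, W)
    alpha : learnable coefficients of shape (C,), one per channel
    """
    a = alpha.reshape(1, -1, 1, 1)                      # broadcast over N, H, W
    return np.maximum(y, 0.0) + a * np.minimum(y, 0.0)

# Example: alpha initialized to 0.25, as in the paper.
y = np.random.randn(2, 3, 4, 4)
alpha = np.full(3, 0.25)
out = prelu(y, alpha)
```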


[Backprop]

The gradient w.r.t. α_d follows from the chain rule, summed over all positions of channel d:

$$\frac{\partial \mathcal{E}}{\partial \alpha_d} = \sum_{y_d} \frac{\partial \mathcal{E}}{\partial f(y_d)} \, \frac{\partial f(y_d)}{\partial \alpha_d}, \qquad \frac{\partial f(y_d)}{\partial \alpha_d} = \begin{cases} 0, & y_d > 0 \\ y_d, & y_d \le 0 \end{cases}, \qquad \frac{\partial f(y_d)}{\partial y_d} = \begin{cases} 1, & y_d > 0 \\ \alpha_d, & y_d \le 0. \end{cases}$$
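
A sketch of these gradients (NumPy; same hypothetical NCHW shapes as above), with the gradient of each α_d accumulated over the batch and spatial positions of its channel:

```python
import numpy as np

def prelu_backward(y, alpha, grad_out):
    """Backprop through channel-wise PReLU.

    d f / d y     = 1  if y > 0, else alpha
    d f / d alpha = 0  if y > 0, else y   (summed over N, H, W per channel)
    """
    a = alpha.reshape(1, -1, 1, 1)
    grad_y = grad_out * np.where(y > 0, 1.0, a)
    grad_alpha = (grad_out * np.where(y > 0, 0.0, y)).sum(axis=(0, 2, 3))
    return grad_y, grad_alpha
```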


[Optimization] Do not use weight decay (l_2 regularization) for α_d.
• Weight decay tends to push α_d towards zero, and thus biases PReLU towards ReLU.
• We use α_d = 0.25 as the initialization (see the sketch below for one way to exclude α_d from weight decay).
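
A minimal PyTorch sketch, assuming a made-up two-layer model and hyper-parameters, that puts the PReLU coefficients into a separate parameter group with weight_decay=0:

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.PReLU(num_parameters=16, init=0.25),   # channel-wise alpha, initialized to 0.25
    nn.Conv2d(16, 16, 3, padding=1),
    nn.PReLU(num_parameters=16, init=0.25),
)

prelu_params = [p for m in model.modules() if isinstance(m, nn.PReLU)
                for p in m.parameters()]
other_params = [p for m in model.modules() if not isinstance(m, nn.PReLU)
                for p in m.parameters(recurse=False)]

optimizer = torch.optim.SGD(
    [{"params": other_params, "weight_decay": 1e-4},   # l_2 regularization on weights
     {"params": prelu_params, "weight_decay": 0.0}],   # no weight decay on alpha
    lr=0.01, momentum=0.9)
```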

[Experiment] Conv1 has learned coefficients (0.681 and 0.596) significantly greater than 0.
• The filters of conv1 are mostly Gabor-like filters such as edge or texture detectors.
• The learned result shows that both the positive and negative responses of these filters are respected.
The deeper conv layers in general have smaller coefficients.

• Activations gradually become “more nonlinear” at increasing depths.
• I.e., the learned model tends to keep more information in earlier stages and becomes more discriminative in deeper stages.

2 Weight Initialization
[Forward Case] Consider the ReLU activation function. For a conv/fc layer, $y_l = W_l x_l + b_l$, where $x_l = f(y_{l-1})$ and $n_l$ is the number of input connections (for a conv layer with $k_l \times k_l$ filters on $c_l$ input channels, $n_l = k_l^2 c_l$). With $w_l$ i.i.d., zero-mean, and independent of $x_l$,

$$\mathrm{Var}[y_l] = n_l \, \mathrm{Var}[w_l] \, E[x_l^2].$$

Note that if $x_l$ had zero mean, then $E[x_l^2] = \mathrm{Var}[x_l]$; but $x_l = \max(0, y_{l-1})$ does not have zero mean. Instead, assuming the pre-activation $y_{l-1}$ has zero mean and a symmetric distribution,

$$E[x_l^2] = \tfrac{1}{2}\,\mathrm{Var}[y_{l-1}], \qquad \text{so} \qquad \mathrm{Var}[y_L] = \mathrm{Var}[y_1] \prod_{l=2}^{L} \tfrac{1}{2}\, n_l\, \mathrm{Var}[w_l].$$

We want this product to stay bounded, i.e.,

$$\tfrac{1}{2}\, n_l\, \mathrm{Var}[w_l] = 1, \quad \forall l,$$

then

$$w_l \sim \mathcal{N}\!\left(0, \tfrac{2}{n_l}\right), \qquad b_l = 0,$$

i.e., a zero-mean Gaussian with std $\sqrt{2/n_l}$.
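
A quick numerical check of this forward condition (a NumPy sketch; the width, depth, and batch size are arbitrary): with std $\sqrt{2/n_l}$ the activation variance stays roughly constant across layers, while std $\sqrt{1/n_l}$ makes it shrink by about one half per layer.

```python
import numpy as np

rng = np.random.default_rng(0)
width, depth, batch = 512, 30, 1000

def final_variance(std_of_layer):
    """Push a random batch through `depth` fully-connected ReLU layers
    and return the variance of the activations after the last layer."""
    x = rng.standard_normal((batch, width))
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * std_of_layer(width)  # zero-mean Gaussian
        x = np.maximum(x @ W.T, 0.0)                                   # ReLU
    return x.var()

print("std = sqrt(2/n):", final_variance(lambda n: np.sqrt(2.0 / n)))  # stays O(1)
print("std = sqrt(1/n):", final_variance(lambda n: np.sqrt(1.0 / n)))  # ~ (1/2)**depth, vanishes
```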



[Backward Case] For backpropagation, $\Delta x_l = \hat{W}_l \Delta y_l$ with $\Delta y_l = f'(y_l)\, \Delta x_{l+1}$, where $\hat{n}_l = k_l^2 d_l$ is the number of output connections. For ReLU, $f'(y_l)$ is 0 or 1 with equal probability; assuming it is independent of $\Delta x_{l+1}$,

$$\mathrm{Var}[\Delta x_l] = \tfrac{1}{2}\, \hat{n}_l\, \mathrm{Var}[w_l]\, \mathrm{Var}[\Delta x_{l+1}], \qquad \text{so} \qquad \mathrm{Var}[\Delta x_2] = \mathrm{Var}[\Delta x_{L+1}] \prod_{l=2}^{L} \tfrac{1}{2}\, \hat{n}_l\, \mathrm{Var}[w_l].$$

We want

$$\tfrac{1}{2}\, \hat{n}_l\, \mathrm{Var}[w_l] = 1, \quad \forall l,$$

then

$$w_l \sim \mathcal{N}\!\left(0, \tfrac{2}{\hat{n}_l}\right), \qquad b_l = 0.$$

Either the forward or the backward condition alone is sufficient: using one of them, the product in the other direction only differs by the ratios $n_l / \hat{n}_l = c_l / d_l$, which telescope and do not grow or shrink exponentially with depth.
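
In PyTorch this initialization is available as nn.init.kaiming_normal_: mode="fan_in" matches the forward condition (variance 2/n_l) and mode="fan_out" the backward one (variance 2/n̂_l). A minimal sketch (the layer sizes are arbitrary):

```python
import torch.nn as nn

conv = nn.Conv2d(64, 128, kernel_size=3)

# Forward condition: Var[w] = 2 / n_l, with n_l = k^2 * c_in = 9 * 64.
nn.init.kaiming_normal_(conv.weight, mode="fan_in", nonlinearity="relu")
nn.init.zeros_(conv.bias)

# Backward condition: Var[w] = 2 / n_hat_l, with n_hat_l = k^2 * d_out = 9 * 128.
nn.init.kaiming_normal_(conv.weight, mode="fan_out", nonlinearity="relu")
```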



[Issue] When the input signal is not normalized (e.g., it lies in [-128, 128]):

• The variance of the input signal is roughly preserved from the first layer to the last.
• Its magnitude can therefore be so large that the softmax operator overflows.


[Solution] Normalize the input signal, but this may impact other hyper-parameters. Another solution is to include a small factor on the weights of all or some layers, e.g., use a std of 0.01 for the first two fc layers and 0.001 for the last (a sketch follows below).
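
A PyTorch sketch of the second workaround (the fc sizes below are made up; the 0.01 / 0.001 stds follow the text):

```python
import torch.nn as nn

fc1 = nn.Linear(4096, 4096)
fc2 = nn.Linear(4096, 4096)
fc3 = nn.Linear(4096, 1000)   # last fc layer, feeding the softmax

# Instead of std = sqrt(2 / n_l), use small fixed stds so that an
# un-normalized input cannot blow up the softmax logits.
nn.init.normal_(fc1.weight, std=0.01)
nn.init.normal_(fc2.weight, std=0.01)
nn.init.normal_(fc3.weight, std=0.001)
for fc in (fc1, fc2, fc3):
    nn.init.zeros_(fc.bias)
```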
