「Deep Learning」Adam

Sina Weibo:小锋子Shawn
Tencent E-mail:403568338@qq.com
http://blog.csdn.net/dgyuanshaofeng/article/details/78759165

    Adam is one of the stochastic optimization algorithms and is commonly used in TensorFlow and PyTorch; in the early days of deep learning, when we worked with Caffe, SGD was still the usual choice. There is also a piece of hearsay that once a network has been trained successfully with Adam, it should be trained again with SGD, the idea being that the solution SGD converges to is better than Adam's, while Adam lets you quickly verify that the network works at all.
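    As a rough sketch of that practice (my own illustration, not from the original post; the model and the training step below are assumed placeholders), one might first run a few epochs with Adam and then continue with SGD:

import torch

# Assumed placeholder model and training step; in practice these would be
# your real network and a loop over a DataLoader.
model = torch.nn.Linear(10, 2)

def train_one_epoch(model, optimizer):
    x, y = torch.randn(32, 10), torch.randn(32, 2)  # dummy batch
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()

# Phase 1: quickly verify that the network trains at all, using Adam.
adam = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(10):
    train_one_epoch(model, adam)

# Phase 2: continue training with SGD, hoping for a better final solution.
sgd = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
for _ in range(10):
    train_one_epoch(model, sgd)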
    The Adam implementation used by PyTorch has default parameters that essentially match the values recommended in the paper: learning rate lr = 0.001, beta1 = 0.9, beta2 = 0.999, and eps = 1e-08. In addition, the L2 penalty is off by default, i.e. no weight decay is applied. beta1 is the coefficient for the running average of the gradient, while beta2 is the coefficient for the running average of the square of the gradient.
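    For example, constructing the optimizer with those defaults spelled out explicitly (the model here is just an assumed placeholder):

import torch

model = torch.nn.Linear(10, 2)  # placeholder model

optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-3,              # learning rate
    betas=(0.9, 0.999),   # beta1: gradient average, beta2: squared-gradient average
    eps=1e-8,             # added to the denominator for numerical stability
    weight_decay=0,       # L2 penalty, off by default
)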
    The source code of torch.optim.Adam:
import math
import torch
from .optimizer import Optimizer


class Adam(Optimizer):
    """Implements Adam algorithm.

    It has been proposed in `Adam: A Method for Stochastic Optimization`_.

    Arguments:
        params (iterable): iterable of parameters to optimize or dicts defining
            parameter groups
        lr (float, optional): learning rate (default: 1e-3)
        betas (Tuple[float, float], optional): coefficients used for computing
            running averages of gradient and its square (default: (0.9, 0.999))
        eps (float, optional): term added to the denominator to improve
            numerical stability (default: 1e-8)
        weight_decay (float, optional): weight decay (L2 penalty) (default: 0)

    .. _Adam\: A Method for Stochastic Optimization:
        https://arxiv.org/abs/1412.6980
    """

    def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8,
                 weight_decay=0):
        defaults = dict(lr=lr, betas=betas, eps=eps,
                        weight_decay=weight_decay)
        super(Adam, self).__init__(params, defaults)

    def step(self, closure=None):
        """Performs a single optimization step.

        Arguments:
            closure (callable, optional): A closure that reevaluates the model
                and returns the loss.
        """
        loss = None
        if closure is not None:
            loss = closure()

        for group in self.param_groups:
            for p in group['params']:
                if p.grad is None:
                    continue
                grad = p.grad.data
                if grad.is_sparse:
                    raise RuntimeError('Adam does not support sparse gradients, please consider SparseAdam instead')

                state = self.state[p]

                # State initialization
                if len(state) == 0:
                    state['step'] = 0
                    # Exponential moving average of gradient values
                    state['exp_avg'] = torch.zeros_like(p.data)
                    # Exponential moving average of squared gradient values
                    state['exp_avg_sq'] = torch.zeros_like(p.data)

                exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
                beta1, beta2 = group['betas']

                state['step'] += 1

                if group['weight_decay'] != 0:
                    grad = grad.add(group['weight_decay'], p.data)

                # Decay the first and second moment running average coefficient
                exp_avg.mul_(beta1).add_(1 - beta1, grad)
                exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad)

                denom = exp_avg_sq.sqrt().add_(group['eps'])

                bias_correction1 = 1 - beta1 ** state['step']
                bias_correction2 = 1 - beta2 ** state['step']
                step_size = group['lr'] * math.sqrt(bias_correction2) / bias_correction1

                p.data.addcdiv_(-step_size, exp_avg, denom)

        return loss
    Notes on the source code. Adam inherits from the parent class Optimizer. The __init__ method performs the default initialization, and its signature shows how Adam is used: defaults packs the hyperparameters into a dict, while params is the list of parameters (tensors) to be optimized.
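    For reference, step() implements the update rule from the Adam paper (this restatement is mine; g_t denotes the gradient at step t). The code folds the two bias corrections into step_size, so eps is added to the uncorrected sqrt of exp_avg_sq; this matches the equations below up to the placement of eps:

m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t
v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2
\hat{m}_t = m_t / (1 - \beta_1^t), \qquad \hat{v}_t = v_t / (1 - \beta_2^t)
\theta_t = \theta_{t-1} - \mathrm{lr} \cdot \hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon)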