PyTorch Basics (2): Variable


Blog: noahsnail.com  |  CSDN  |  Jianshu

This post covers the basic usage of the Variable class in PyTorch.

import torch
from torch.autograd import Variable

tensor = torch.FloatTensor([[1, 2], [3, 4]])
# Define a Variable; requires_grad specifies whether gradients should be computed
variable = Variable(tensor, requires_grad=True)
print(tensor)
print(variable)
 1  2
 3  4
[torch.FloatTensor of size 2x2]
Variable containing:
 1  2
 3  4
[torch.FloatTensor of size 2x2]
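A detail worth noting: gradient tracking is off unless you ask for it. A minimal sketch (assuming a PyTorch build where `Variable` is available; in modern releases it is a thin wrapper and plain tensors accept `requires_grad` directly):

```python
import torch
from torch.autograd import Variable

t = torch.FloatTensor([[1, 2], [3, 4]])
v_default = Variable(t)                      # no gradient tracking by default
v_tracked = Variable(t, requires_grad=True)  # autograd records operations on this node

print(v_default.requires_grad)  # False
print(v_tracked.requires_grad)  # True
```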
# Compute the mean of x^2
tensor_mean = torch.mean(tensor * tensor)
variable_mean = torch.mean(variable * variable)
print(tensor_mean)
print(variable_mean)
7.5
Variable containing:
 7.5000
[torch.FloatTensor of size 1]
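The value 7.5 is easy to check by hand: the squared entries are 1, 4, 9, 16, and their mean is (1 + 4 + 9 + 16) / 4. A minimal sketch:

```python
import torch

t = torch.FloatTensor([[1, 2], [3, 4]])
m = torch.mean(t * t)  # (1 + 4 + 9 + 16) / 4 = 7.5
print(float(m))        # 7.5
```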
# Backpropagate through the Variable
# The gradient is derived as follows:
# variable_mean = 1/4 * sum(variable * variable)
# d(variable_mean)/d(variable) = 1/4 * 2 * variable = 1/2 * variable
variable_mean.backward()
# Print the gradient accumulated in variable
print(variable.grad)
Variable containing:
 0.5000  1.0000
 1.5000  2.0000
[torch.FloatTensor of size 2x2]
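The analytic gradient derived above can be checked against what autograd actually computes. A minimal sketch (assuming a PyTorch build where `Variable` is available and `torch.allclose` exists):

```python
import torch
from torch.autograd import Variable

x = Variable(torch.FloatTensor([[1, 2], [3, 4]]), requires_grad=True)
y = torch.mean(x * x)  # mean(x^2)
y.backward()

# Analytic gradient: d(mean(x^2))/dx = 2x / 4 = x / 2
expected = torch.FloatTensor([[0.5, 1.0], [1.5, 2.0]])
print(torch.allclose(x.grad.data, expected))  # True
```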
# * is element-wise multiplication, not matrix multiplication
print(tensor * tensor)
print(variable * variable)
  1   4
  9  16
[torch.FloatTensor of size 2x2]
Variable containing:
  1   4
  9  16
[torch.FloatTensor of size 2x2]
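For contrast, true matrix multiplication is done with `torch.mm`. A minimal sketch showing the two products side by side:

```python
import torch

t = torch.FloatTensor([[1, 2], [3, 4]])

elementwise = t * t      # [[1, 4], [9, 16]]
matmul = torch.mm(t, t)  # [[1*1+2*3, 1*2+2*4], [3*1+4*3, 3*2+4*4]] = [[7, 10], [15, 22]]
```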
# Print the data held by the Variable; .data is the underlying tensor
print(variable.data)
 1  2
 3  4
[torch.FloatTensor of size 2x2]
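Because `.data` is a plain tensor, it is the usual bridge to NumPy. A minimal sketch (note that in modern PyTorch, `detach()` is the recommended way to get a gradient-free view):

```python
import torch
from torch.autograd import Variable

v = Variable(torch.FloatTensor([[1, 2], [3, 4]]), requires_grad=True)
arr = v.data.numpy()  # NumPy view of the underlying storage; carries no autograd history
print(arr.shape)      # (2, 2)
```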