PyTorch from Beginner to Mastery (1): Linear Models

Let's start with a problem and then see how a machine learning model can work out the answer.

The problem is simple: the table below records how long someone studied (in hours) and the score they got. What score should we expect after four hours of study?

    Hours studied (x)    Score (y)
    1.0                  2.0
    2.0                  4.0
    3.0                  6.0
    4.0                  ?

A five-year-old could see the answer at a glance, but how do we get a program to compute it?
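
The data suggests a straight line through the origin, so we model the score as y_hat = x · w, where w is the weight we want to learn. For a single training pair (x, y) we use the squared error as the loss, and gradient descent nudges w a small step against the derivative of that loss:

    y_hat = x · w
    loss(x, y) = (y_hat - y)² = (x·w - y)²
    d loss / d w = 2 · x · (x·w - y)
    w ← w - 0.01 · (d loss / d w)

This is exactly what the gradient() function and the update step in the code below implement.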

We first compute the result with a hand-written gradient descent loop in plain Python:

x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]

w = 1.0  # initial guess for the weight

# our model: a line through the origin
def forward(x):
    return x * w

# loss function: squared error for a single sample
def loss(x, y):
    y_pred = forward(x)
    return (y_pred - y) * (y_pred - y)

# gradient of the loss with respect to w: d_loss/d_w = 2x(xw - y)
def gradient(x, y):
    return 2 * x * (x * w - y)

# Before training
print("predict (before training)", 4, forward(4))

# Training loop
for epoch in range(10):
    for x_val, y_val in zip(x_data, y_data):
        grad = gradient(x_val, y_val)
        w = w - 0.01 * grad
        print("\tgrad: ", x_val, y_val, round(grad, 2))
        l = loss(x_val, y_val)
    print("progress:", epoch, "w=", round(w, 2), "loss=", round(l, 2))

# After training
print("predict (after training)", "4 hours", forward(4))
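
Running this loop, w moves from 1.0 toward 2.0 and the prediction for four hours climbs toward 8. As an intermediate step, the same loop can be written so that torch.autograd works out the gradient for us instead of the hand-derived gradient() function. This is a minimal sketch added here for illustration (it is not part of the original article, and it assumes a reasonably recent PyTorch):

import torch

x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]

# the weight is now a tensor that records operations, so autograd can differentiate through it
w = torch.tensor(1.0, requires_grad=True)

def forward(x):
    return x * w

def loss(x, y):
    y_pred = forward(x)
    return (y_pred - y) ** 2

for epoch in range(10):
    for x_val, y_val in zip(x_data, y_data):
        l = loss(x_val, y_val)
        l.backward()               # autograd fills in w.grad with d_loss/d_w
        with torch.no_grad():
            w -= 0.01 * w.grad     # same update rule as before
        w.grad.zero_()             # clear the gradient before the next step
    print("progress:", epoch, "w=", round(w.item(), 2), "loss=", round(l.item(), 2))

print("predict (after training)", 4, forward(4).item())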

Now let's implement the same thing with PyTorch:

import torch

x_data = torch.tensor([[1.0], [2.0], [3.0]])
y_data = torch.tensor([[2.0], [4.0], [6.0]])

class Model(torch.nn.Module):
    def __init__(self):
        """
        In the constructor we instantiate one nn.Linear module.
        """
        super(Model, self).__init__()
        self.linear = torch.nn.Linear(1, 1)  # one input feature, one output

    def forward(self, x):
        """
        In the forward pass we accept an input tensor and must return an output
        tensor. We can use modules defined in the constructor as well as
        arbitrary operators on tensors.
        """
        y_pred = self.linear(x)
        return y_pred

# our model
model = Model()

# Construct our loss function and an optimizer. The call to model.parameters()
# in the SGD constructor hands the optimizer the learnable parameters of the
# nn.Linear module that is a member of the model.
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Training loop
for epoch in range(500):
    # Forward pass: compute predicted y by passing x to the model
    y_pred = model(x_data)

    # Compute and print loss
    loss = criterion(y_pred, y_data)
    print(epoch, loss.item())

    # Zero gradients, perform a backward pass, and update the weights
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training
hour_var = torch.tensor([[4.0]])
print("predict (after training)", 4, model(hour_var).item())
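
After 500 epochs the loss is close to zero and the prediction for four hours approaches 8.0. To see what the model actually learned, you can read the weight and bias straight off the nn.Linear layer (a small check added here for illustration, not part of the original listing); for this data they should end up near 2.0 and 0.0:

# inspect the learned parameters of the linear layer
print("w =", model.linear.weight.item())  # should be close to 2.0
print("b =", model.linear.bias.item())    # should be close to 0.0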