Getting Started with PyTorch (3) ---- Using the nn Package
Comparing PyTorch with (Lua) Torch: PyTorch replaces all of the Container classes with autograd, so there is no need for ConcatTable, CAddTable and the like; you just write the computation as ordinary expressions. Instead of

output = nn.CAddTable():forward({input1, input2})

you simply write

output = input1 + input2

Really simple.
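As a quick illustration (my own minimal sketch, written against the Variable API this post uses), the plain "+" is recorded by autograd just like any other op:

import torch
from torch.autograd import Variable

input1 = Variable(torch.randn(2, 3), requires_grad=True)
input2 = Variable(torch.randn(2, 3), requires_grad=True)

output = input1 + input2   # no CAddTable: autograd tracks the addition
output.sum().backward()

print(input1.grad)         # d(sum)/d(input1), a tensor of ones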
(The original post showed a figure here comparing the module internals of torch and PyTorch.) A PyTorch module holds only its parameters, .weight and .bias; the per-module state .gradInput and .output that torch modules carried has been eliminated.
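A small check makes this concrete (my own sketch, not from the original post): a freshly built layer exposes its parameters but stores no activations:

import torch.nn as nn

conv = nn.Conv2d(1, 10, 5)
print(conv.weight.size())   # torch.Size([10, 1, 5, 5]) -- parameters live on the module
print(conv.bias.size())     # torch.Size([10])
# unlike torch, there is no conv.output or conv.gradInput attribute;
# activations and gradients travel through autograd Variables instead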
Example:
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F

class MNISTConvNet(nn.Module):
    def __init__(self):
        # this is the place where you instantiate all your modules
        # you can later access them using the same names you've given them
        # in here
        super(MNISTConvNet, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, 5)
        self.pool1 = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(10, 20, 5)
        self.pool2 = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    # it's the forward function that defines the network structure
    # we're accepting only a single input in here, but if you want,
    # feel free to use more
    def forward(self, input):
        x = self.pool1(F.relu(self.conv1(input)))
        x = self.pool2(F.relu(self.conv2(x)))

        # what follows really blew me away -- the flexibility is huge.
        # in your model definition you can go full crazy and use arbitrary
        # python code to define your model structure
        # all these are perfectly legal, and will be handled correctly
        # by autograd:
        # if x.gt(0) > x.numel() / 2:
        #     ...
        #
        # you can even do a loop and reuse the same module inside it
        # modules no longer hold ephemeral state, so you can use them
        # multiple times during your forward pass
        # while x.norm(2) < 10:
        #     x = self.conv1(x)

        x = x.view(x.size(0), -1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return x
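Instantiating it works like any other module; printing shows the submodules under the names given in __init__ (standard PyTorch behavior, not shown in the original post):

net = MNISTConvNet()
print(net)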
Note: the input to nn.Conv2d must be 4D, i.e. nSamples x nChannels x Height x Width. If you only have a single sample, add a leading dimension with input.unsqueeze(0), as sketched below.
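For example, a minimal sketch for the 28x28 MNIST-sized input this network expects (the random input is a stand-in for real data):

input = Variable(torch.randn(1, 1, 28, 28))  # a batch of one 1-channel 28x28 image
out = net(input)
print(out.size())                            # torch.Size([1, 10])

# starting from a single 3D image, add the batch dimension first:
img = torch.randn(1, 28, 28)
out = net(Variable(img.unsqueeze(0)))        # now 1 x 1 x 28 x 28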
Inspecting a layer's weights and their gradients
print(net.conv1.weight.grad)
print(net.conv1.weight.data.norm())       # norm of the weights
print(net.conv1.weight.grad.data.norm())  # norm of the gradients
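Note that the gradients are only meaningful after a backward pass has run (in recent PyTorch versions .grad is None until then). A minimal sketch, using a dummy sum() as a stand-in loss:

out = net(input)
out.sum().backward()                      # dummy scalar loss, just to populate .grad
print(net.conv1.weight.grad.data.norm())  # now well-defined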
Inspecting a layer's output and grad_output
We have already seen how to inspect a layer's weight and its grad. To inspect a layer's output and grad_output, you need hooks.
Forward hook
def printnorm(self, input, output):
    # input is a tuple of packed inputs
    # output is a Variable. output.data is the Tensor we are interested in
    print('Inside ' + self.__class__.__name__ + ' forward')
    print('')
    print('input: ', type(input))
    print('input[0]: ', type(input[0]))
    print('output: ', type(output))
    print('')
    print('input size:', input[0].size())
    print('output size:', output.data.size())
    print('output norm:', output.data.norm())

net.conv2.register_forward_hook(printnorm)
out = net(input)
Backward hook
def printgradnorm(self, grad_input, grad_output):
    print('Inside ' + self.__class__.__name__ + ' backward')
    print('Inside class:' + self.__class__.__name__)
    print('')
    print('grad_input: ', type(grad_input))
    print('grad_input[0]: ', type(grad_input[0]))
    print('grad_output: ', type(grad_output))
    print('grad_output[0]: ', type(grad_output[0]))
    print('')
    print('grad_input size:', grad_input[0].size())
    print('grad_output size:', grad_output[0].size())
    print('grad_input norm:', grad_input[0].data.norm())

net.conv2.register_backward_hook(printgradnorm)
out = net(input)
err = loss_fn(out, target)
err.backward()
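Both register_forward_hook and register_backward_hook return a handle, so a hook can be detached once you are done debugging:

handle = net.conv2.register_forward_hook(printnorm)
out = net(input)   # the hook fires during this call
handle.remove()    # detach it; subsequent forward passes run silently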