[torch] nn internal functions?
1. functions
https://bigaidream.gitbooks.io/subsets_ml_cookbook/content/dl/lua/lua_module.html
[output] forward(input)
Takes an input object, and computes the corresponding output of the module.
After a forward(), the output state variable should have been updated to the new state.
We do NOT override this function. Instead, we implement the updateOutput(input) function; forward() in the abstract parent class Module calls updateOutput(input).
[gradInput] backward(input, gradOutput)
Performs a backpropagation step through the module, w.r.t. the given input.
A backpropagation step consists of computing two kinds of gradients at input, given gradOutput (the gradients w.r.t. the output of the module). This function simply performs that task using two function calls:
- a function call to updateGradInput(input, gradOutput)
- a function call to accGradParameters(input, gradOutput)
We do NOT override this function. Instead, we override the updateGradInput(input, gradOutput) and accGradParameters(input, gradOutput) functions.
[output] updateOutput(input)
When defining a new module, this method should be overloaded.
Computes the output using the current parameter set of the class and input. This function returns the result which is stored in the output field.
[gradInput] updateGradInput(input, gradOutput)
When defining a new module, this method should be overloaded.
Computes the gradient of the module w.r.t. its own input. This is returned in gradInput. Also, the gradInput state variable is updated accordingly.
[gradInput] accGradParameters(input, gradOutput)
When defining a new module, this method should be overloaded, if the module has trainable parameters.
Computes the gradient of the module w.r.t. its own parameters. Many modules do NOT perform this step as they do NOT have any trainable parameters. The module is expected to accumulate the gradients w.r.t. the trainable parameters in some variables.
Zeroing this accumulation is achieved with zeroGradParameters(), and updating the trainable parameters according to this accumulation is done with updateParameters().
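The division of labor above (fixed forward/backward templates that delegate to overridable update/accumulate hooks) can be sketched in a few lines. This is a minimal Python sketch of the same protocol, not Torch itself; the ScalarLinear module (y = w*x + b on plain numbers) is a made-up example used only to keep the code dependency-free.

```python
class Module:
    """Sketch of the Torch nn.Module call protocol."""

    def forward(self, input):
        # Template method: NOT overridden; delegates to updateOutput.
        self.output = self.updateOutput(input)
        return self.output

    def backward(self, input, gradOutput):
        # Template method: exactly the two calls described above.
        self.gradInput = self.updateGradInput(input, gradOutput)
        self.accGradParameters(input, gradOutput)
        return self.gradInput

    # Hooks that a concrete module overrides.
    def updateOutput(self, input):
        raise NotImplementedError

    def updateGradInput(self, input, gradOutput):
        raise NotImplementedError

    def accGradParameters(self, input, gradOutput):
        # Default no-op: many modules have no trainable parameters.
        pass


class ScalarLinear(Module):
    """y = w*x + b on scalars (hypothetical example module)."""

    def __init__(self, w, b):
        self.w, self.b = w, b
        self.gradW = self.gradB = 0.0

    def updateOutput(self, input):
        return self.w * input + self.b

    def updateGradInput(self, input, gradOutput):
        # dL/dx = gradOutput * dy/dx = gradOutput * w
        return gradOutput * self.w

    def accGradParameters(self, input, gradOutput):
        # Accumulate (do not overwrite) the parameter gradients.
        self.gradW += gradOutput * input
        self.gradB += gradOutput

    def zeroGradParameters(self):
        self.gradW = self.gradB = 0.0

    def updateParameters(self, learningRate):
        self.w -= learningRate * self.gradW
        self.b -= learningRate * self.gradB
```

Calling forward(3.0) on ScalarLinear(2.0, 1.0) returns 7.0; backward(3.0, 1.0) returns the input gradient 2.0 and leaves gradW = 3.0, gradB = 1.0 accumulated until zeroGradParameters() clears them.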
2. practice
https://github.com/apsvvfb/VQA_jan
-- train.lua
word_feat, img_feat, w_ques, w_img, mask = unpack(protos.word:forward({data.questions, new_data_images}))
dummy = protos.word:backward({data.questions, data.images}, {d_conv_feat, d_w_ques, d_w_img, d_conv_img, d_ques_img})
-- misc/word_level.lua
function layer:updateOutput(input)
  local seq = input[1]
  local img = input[2]
  ...
  return {self.embed_output, self.img_feat, w_embed_ques, w_embed_img, self.mask}
end

function layer:updateGradInput(input, gradOutput)
  local seq = input[1]
  local img = input[2]
  ...
  return self.gradInput
end