[torch] nn internal functions?


1. functions

https://bigaidream.gitbooks.io/subsets_ml_cookbook/content/dl/lua/lua_module.html

[output] forward(input)

Takes an input object, and computes the corresponding output of the module.

After a forward(), the output state variable should have been updated to the new state.

We do NOT override this function. Instead, we implement the updateOutput(input) function. The forward() method of the abstract parent class Module will call updateOutput(input).
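As a minimal sketch (assuming the standard nn package and a stock nn.Linear module), calling forward() runs updateOutput() and stores the result:

require 'nn'

local m = nn.Linear(10, 5)          -- a module with trainable parameters
local input = torch.randn(10)
local output = m:forward(input)     -- internally dispatches to m:updateOutput(input)
print(output:size())                -- 5; the same tensor is also kept in m.output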

[gradInput] backward(input, gradOutput)
Performs a backpropagation step through the module, w.r.t. the given input.

A backpropagation step consists of computing two kinds of gradients at input given gradOutput (gradients with respect to the output of the module). This function simply performs this task using two function calls:

- a function call to updateGradInput(input, gradOutput)
- a function call to accGradParameters(input, gradOutput)

We do NOT override this function call. We override updateGradInput(input, gradOutput) and accGradParameters(input, gradOutput) functions.
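As a hedged illustration with a stock nn.Linear, backward() returns gradInput and, as a side effect, accumulates the parameter gradients:

require 'nn'

local m = nn.Linear(10, 5)
local input = torch.randn(10)
local output = m:forward(input)            -- forward pass must come first
local gradOutput = torch.randn(5)          -- pretend gradient w.r.t. the output
local gradInput = m:backward(input, gradOutput)
-- backward() above is equivalent to the two calls it wraps:
--   m:updateGradInput(input, gradOutput)
--   m:accGradParameters(input, gradOutput)
-- the accumulated parameter gradients live in m.gradWeight and m.gradBias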

[output] updateOutput(input)
When defining a new module, this method should be overloaded.

Computes the output using the current parameter set of the class and the given input. This function returns the result, which is stored in the output field.
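As a sketch of what overloading looks like, a hypothetical element-wise layer (the name nn.MulByTwo is made up here) could implement updateOutput() as:

require 'nn'

-- hypothetical module: multiplies every input element by 2
local MulByTwo, parent = torch.class('nn.MulByTwo', 'nn.Module')

function MulByTwo:__init()
   parent.__init(self)             -- sets up self.output and self.gradInput tensors
end

function MulByTwo:updateOutput(input)
   self.output:resizeAs(input):copy(input):mul(2)
   return self.output
end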

[gradInput] updateGradInput(input, gradOutput)
When defining a new module, this method should be overloaded.

Computes the gradient of the module w.r.t. its own input. This is returned in gradInput. Also, the gradInput state variable is updated accordingly.
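Continuing the hypothetical nn.MulByTwo sketch from above, the gradient of y = 2 * x with respect to x is simply 2 * gradOutput:

function MulByTwo:updateGradInput(input, gradOutput)
   self.gradInput:resizeAs(gradOutput):copy(gradOutput):mul(2)
   return self.gradInput
end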

accGradParameters(input, gradOutput)
When defining a new module, this method should be overloaded if the module has trainable parameters.

Computes the gradient of the module w.r.t. its own parameters. Many modules do NOT perform this step as they do NOT have any trainable parameters. The module is expected to accumulate the gradients w.r.t. the trainable parameters in some variables.

Zeroing this accumulation is achieved with zeroGradParameters() and updating the trainable parameters according to this accumulation is done with updateParameters().
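Putting these together, a minimal sketch of one training step (assuming nn.Linear and nn.MSECriterion from the standard nn package) would be:

require 'nn'

local m = nn.Linear(10, 5)
local criterion = nn.MSECriterion()
local input, target = torch.randn(10), torch.randn(5)

m:zeroGradParameters()                         -- clear previously accumulated gradients
local output = m:forward(input)
local loss = criterion:forward(output, target)
local gradOutput = criterion:backward(output, target)
m:backward(input, gradOutput)                  -- accumulates into m.gradWeight / m.gradBias
m:updateParameters(0.01)                       -- params <- params - 0.01 * accumulated gradients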

2. practice

https://github.com/apsvvfb/VQA_jan
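The snippets below, taken from the repository above, show the pattern in practice: train.lua drives a custom module through forward()/backward(), while misc/word_level.lua overrides updateOutput() and updateGradInput().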

-- train.lua
word_feat, img_feat, w_ques, w_img, mask = unpack(protos.word:forward({data.questions, new_data_images}))
dummy = protos.word:backward({data.questions, data.images}, {d_conv_feat, d_w_ques, d_w_img, d_conv_img, d_ques_img})
-- misc/word_level.lua
function layer:updateOutput(input)
  local seq = input[1]
  local img = input[2]
  ...
  return {self.embed_output, self.img_feat, w_embed_ques, w_embed_img, self.mask}
end

function layer:updateGradInput(input, gradOutput)
  local seq = input[1]
  local img = input[2]
  ...
  return self.gradInput
end