DL Study Notes [20]: The Simple layers in the nn package
Linear: y = Ax + b
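Linear is just this affine map. As a plain-Python illustration (not Torch code), with a hypothetical 2-input, 3-output layer:

```python
# y = A x + b, with A of shape (3, 2): 2 inputs -> 3 outputs
A = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]
b = [0.5, 0.5, 0.5]
x = [2.0, 3.0]

# Each output y_i is the dot product of row i of A with x, plus b_i.
y = [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) + b_i
     for row, b_i in zip(A, b)]
print(y)  # [2.5, 3.5, 5.5]
```

In nn.Linear(inputSize, outputSize), A and b are the learnable parameters.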
This is the sparse input format used by SparseLinear: each row is a {position, value} pair — the first number is the index, the second is the value at that index.

```lua
x = torch.Tensor({ {1, 0.1}, {2, 0.3}, {10, 0.3}, {31, 0.2} })
print(x)
--  1.0000  0.1000
--  2.0000  0.3000
-- 10.0000  0.3000
-- 31.0000  0.2000
```
Bilinear: ∀k: y_k = x_1 A_k x_2 + b. Example code:

```lua
module = nn.Bilinear(10, 5, 3)  -- x_1 has 10 units, x_2 has 5, output has 3
input = {torch.randn(128, 10), torch.randn(128, 5)}  -- 128 input examples
module:forward(input)
```
```lua
module = nn.PartialLinear(5, 3)  -- 5 inputs, 3 outputs
module:setPartition(torch.Tensor({2, 4}))  -- only compute the 2nd and 4th indices out of a total of 5 indices
```
Euclidean: y_j = || w_j - x ||. Why can the weights and the input be subtracted? Because nn.Euclidean(inputSize, outputSize) stores outputSize weight vectors w_j, each with the same dimension inputSize as the input x; the output is the distance from x to each of these templates.
WeightedEuclidean: y_j = || c_j * (w_j - x) || — like Euclidean, but each template additionally learns a per-dimension (diagonal) scaling c_j.
Cosine: y_j = (x · w_j) / ( || w_j || * || x || )
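To make the Euclidean and Cosine formulas above concrete, here is a plain-Python check (not Torch code) for a single template vector w_j and an input x of the same dimension:

```python
import math

# One template weight vector w_j and an input x of the same dimension --
# this shared dimensionality is what makes w_j - x well-defined.
w = [1.0, 2.0, 3.0]
x = [1.0, 0.0, 1.0]

# Euclidean layer output: y_j = ||w_j - x||
euclid = math.sqrt(sum((wi - xi) ** 2 for wi, xi in zip(w, x)))

# Cosine layer output: y_j = (x . w_j) / (||w_j|| * ||x||)
dot = sum(wi * xi for wi, xi in zip(w, x))
norm_w = math.sqrt(sum(wi ** 2 for wi in w))
norm_x = math.sqrt(sum(xi ** 2 for xi in x))
cosine = dot / (norm_w * norm_x)

print(euclid)  # distance from x to the template
print(cosine)  # cosine similarity, in [-1, 1]
```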
Example: a network that forwards both the features x and the label y; nn.Identity() passes y through unchanged so a CriterionTable can compare the prediction with it.

```lua
pred_mlp = nn.Sequential()      -- A network that makes predictions given x.
pred_mlp:add(nn.Linear(5, 4))
pred_mlp:add(nn.Linear(4, 3))

xy_mlp = nn.ParallelTable()     -- A network for predictions and for keeping the
xy_mlp:add(pred_mlp)            -- true label for comparison with a criterion
xy_mlp:add(nn.Identity())       -- by forwarding both x and y through the network.

mlp = nn.Sequential()           -- The main network that takes both x and y.
mlp:add(xy_mlp)                 -- It feeds x and y to parallel networks;
cr = nn.MSECriterion()
cr_wrap = nn.CriterionTable(cr)
mlp:add(cr_wrap)                -- and then applies the criterion.

for i = 1, 100 do               -- Do a few training iterations
  x = torch.ones(5)             -- Make input features.
  y = torch.Tensor(3)
  y:copy(x:narrow(1, 1, 3))     -- Make output label.
  err = mlp:forward{x, y}       -- Forward both input and output.
  print(err)                    -- Print error from criterion.
  mlp:zeroGradParameters()      -- Do backprop...
  mlp:backward({x, y})
  mlp:updateParameters(0.05)
end
```
Modules that adapt basic Tensor methods :
Copy : a copy of the input with type casting ; from the description, nn.Copy(inputType, outputType) copies the input into the output while converting the tensor type — the output has exactly the same shape and values as the input, just as a different tensor type.
Narrow : a narrow operation over a given dimension ;
Replicate : repeats input n times along its first dimension ;
Reshape : a reshape of the inputs ;
View : a view of the inputs ;
Contiguous : returns a contiguous-in-memory version of the input (copying only if the input is non-contiguous) ;
Select : a select over a given dimension ;
MaskedSelect : a masked select module performs the torch.maskedSelect operation ;
Index : an index over a given dimension ;
Squeeze : squeezes the input (removes singleton dimensions) ;
Unsqueeze : unsqueezes the input, i.e., inserts a singleton dimension ;
Transpose : transposes the input ;
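Most of these wrap elementary indexing operations. As a rough plain-Python analogy (not Torch code; Torch dimensions are 1-indexed), with a 2-D "tensor" stored as a list of rows:

```python
t = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]

# Narrow over the first dimension: take 2 rows starting at row 1
narrowed = t[0:2]                      # like nn.Narrow(1, 1, 2)

# Select over the first dimension: pick one row, dropping that dimension
selected = t[1]                        # like nn.Select(1, 2)

# Replicate along a new first dimension, n = 2 times
replicated = [t, t]                    # like nn.Replicate(2)

# Reshape/View: reinterpret the 9 values as a flat list
flat = [v for row in t for v in row]   # like nn.View(9)

print(narrowed, selected, flat)
```

The real modules return tensor views where possible rather than Python-style copies.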
Modules that adapt mathematical Tensor methods :
AddConstant : adding a constant ;
MulConstant : multiplying a constant ;
Max : a max operation over a given dimension ;
Min : a min operation over a given dimension ;
Mean : a mean operation over a given dimension ;
Sum : a sum operation over a given dimension ;
Exp : an element-wise exp operation ;
Log : an element-wise log operation ;
Abs : an element-wise abs operation ;
Power : an element-wise pow operation ;
Square : an element-wise square operation ;
Sqrt : an element-wise sqrt operation ;
Clamp : an element-wise clamp operation ;
Normalize : normalizes the input to have unit L_p norm ;
MM : matrix-matrix multiplication (also supports batches of matrices) ;
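For example, Normalize with p = 2 divides the input by its L2 norm. In plain Python (not Torch code):

```python
import math

x = [3.0, 4.0]
norm = math.sqrt(sum(v * v for v in x))  # L2 norm = 5.0
y = [v / norm for v in x]
print(y)  # [0.6, 0.8] -- now ||y||_2 == 1
```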
Miscellaneous Modules :
BatchNormalization : mean/std normalization over the mini-batch inputs (with an optional affine transform) ;
PixelShuffle : Rearranges elements in a tensor of shape [C*r^2, H, W] to a tensor of shape [C, H*r, W*r] ;
Identity : forward input as-is to output (useful with ParallelTable) ;
Dropout : masks parts of the input using binary samples from a Bernoulli distribution ;
SpatialDropout : same as Dropout but for spatial inputs where adjacent pixels are strongly correlated ;
VolumetricDropout : same as Dropout but for volumetric inputs where adjacent voxels are strongly correlated ;
Padding : adds padding to a dimension ;
L1Penalty : adds an L1 penalty to an input (for sparsity) ;
GradientReversal : reverses the gradient (to maximize an objective function) ;
GPU : decorates a module so that it can be executed on a specific GPU device.
TemporalDynamicKMaxPooling : selects the k highest values in a sequence. k can be calculated based on sequence length ;
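Dropout zeroes each input value independently with probability p and, in the inverted-dropout scheme Torch uses during training, rescales the survivors by 1/(1-p) so the expected output equals the input. A minimal plain-Python sketch (not the nn implementation):

```python
import random

def dropout(x, p=0.5, rng=random):
    """Inverted dropout: zero each value with probability p,
    scale the rest by 1/(1-p) so E[output] == input."""
    scale = 1.0 / (1.0 - p)
    return [v * scale if rng.random() >= p else 0.0 for v in x]

random.seed(0)
out = dropout([1.0, 2.0, 3.0, 4.0], p=0.5)
print(out)  # each entry is either zeroed or doubled
```

At test time dropout is disabled and the input passes through unchanged.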