Coursera Deep Learning, Course 4 Week 2

  1. ResNets
    The identity block: the standard ResNet block, used when the input and output dimensions match, so the shortcut value can be added back to the main path unchanged.
    The convolutional block: use this type of block when the input and output dimensions don't match up. The CONV2D layer in the shortcut path resizes the input x to a different dimension, so that the dimensions match up in the final addition that adds the shortcut value back to the main path; this plays a similar role to the matrix W_s discussed in lecture. For example, to reduce the activation's height and width by a factor of 2, you can use a 1x1 convolution with a stride of 2. The CONV2D layer on the shortcut path does not use any non-linear activation function; its main role is to apply a (learned) linear function that reduces the dimension of the input so that the dimensions match up for the later addition step.
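    A minimal Keras sketch of the two blocks (filter counts and the two-CONV main path are illustrative; the course's programming assignment uses three CONV layers per block):

```python
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, Add

def identity_block(x, filters):
    """Main path: CONV -> BN -> ReLU, CONV -> BN, then add the unchanged shortcut."""
    shortcut = x
    x = Conv2D(filters, (3, 3), padding='same')(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = Conv2D(filters, (3, 3), padding='same')(x)
    x = BatchNormalization()(x)
    x = Add()([x, shortcut])  # dimensions already match, no resizing needed
    return Activation('relu')(x)

def convolutional_block(x, filters, stride=2):
    """Same main path, but a 1x1 CONV (stride 2, no activation) resizes the
    shortcut so the final addition is valid (the W_s of the lecture)."""
    shortcut = Conv2D(filters, (1, 1), strides=stride)(x)  # learned linear resize
    shortcut = BatchNormalization()(shortcut)
    x = Conv2D(filters, (3, 3), strides=stride, padding='same')(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = Conv2D(filters, (3, 3), padding='same')(x)
    x = BatchNormalization()(x)
    x = Add()([x, shortcut])
    return Activation('relu')(x)
```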

2. 1x1 conv & Inception network motivation
    An Inception module stacks together, along the channel dimension, the output volumes produced by convolution kernels of various sizes and by pooling layers.
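    A minimal Keras sketch of one such module (filter counts are illustrative): every branch uses 'same' padding and stride 1, so all outputs keep the input's height and width and can be concatenated along the channel axis.

```python
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Concatenate

def inception_module(x):
    b1 = Conv2D(64, (1, 1), padding='same', activation='relu')(x)   # 1x1 branch
    b2 = Conv2D(128, (3, 3), padding='same', activation='relu')(x)  # 3x3 branch
    b3 = Conv2D(32, (5, 5), padding='same', activation='relu')(x)   # 5x5 branch
    b4 = MaxPooling2D((3, 3), strides=1, padding='same')(x)         # pooling branch
    return Concatenate(axis=-1)([b1, b2, b3, b4])  # stack the volumes channel-wise
```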
    A 1x1 CONV (a "bottleneck" layer) can greatly reduce the amount of computation, as the arithmetic below shows.
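    The lecture's worked example as quick arithmetic: a 5x5 CONV with 32 filters applied to a 28x28x192 volume, with and without a 1x1 bottleneck down to 16 channels first.

```python
# Direct 5x5 CONV: each of the 28*28*32 output values needs 5*5*192 multiplications.
direct = 28 * 28 * 32 * (5 * 5 * 192)
# Bottleneck: a 1x1 CONV down to 16 channels, then the 5x5 CONV on 28x28x16.
bottleneck = 28 * 28 * 16 * (1 * 1 * 192) + 28 * 28 * 32 * (5 * 5 * 16)
print(f"{direct:,} vs {bottleneck:,}")  # ~120.4M vs ~12.4M, roughly a 10x saving
```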

  3. Transfer learning
    Much as discussed earlier: with very little data, you can replace and retrain only the final layer(s); with more data, you can also train the earlier layers. Deep learning frameworks provide settings to freeze the parameters of the earlier layers; a Keras sketch follows.
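    A minimal Keras sketch, assuming an ImageNet-pretrained ResNet50 base and a hypothetical 10-class target task:

```python
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

base = ResNet50(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False  # freeze the pre-trained layers (little data)

x = GlobalAveragePooling2D()(base.output)
out = Dense(10, activation='softmax')(x)  # replace only the final layer
model = Model(base.input, out)
# With more data, set trainable = True on some (or all) of base.layers
# and fine-tune with a small learning rate.
```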

  4. Data augmentation
    1). mirroring
    2). random cropping
    3). rotation
    4). shearing
    5). local warping
    6). color shifting
    7). PCA color augmentation (as in AlexNet)
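    Several of these come built into Keras' ImageDataGenerator; a sketch with illustrative ranges (PCA color augmentation and local warping are not built in and would need custom code):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    horizontal_flip=True,      # 1) mirroring
    width_shift_range=0.1,     # 2) crude stand-in for random cropping
    height_shift_range=0.1,
    rotation_range=15,         # 3) rotation, in degrees
    shear_range=0.1,           # 4) shearing
    channel_shift_range=30.0,  # 6) color shifting
)
# augmented_batches = datagen.flow(x_train, y_train, batch_size=32)
```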

  5. Data vs. hand-engineering
    Two sources of knowledge can improve a model's performance: (labeled) data and hand-engineering. The less data a domain has, the more hand-engineering it needs, e.g. hand-designed features, network architectures, and other algorithm components.
