[深度学习论文笔记][Visualizing] Visualizing and Understanding Convolutional Networks
Zeiler, Matthew D., and Rob Fergus. “Visualizing and understanding convolutional networks.” European Conference on Computer Vision. Springer International Publishing, 2014.(Citations: 1207).
Idea Occlude portions of the input image, revealing which parts of the scene are important for classification.
Method Occlude different portions of the input image with a grey square, and monitor the probability output of correct class of the classifier, plot as a function of the position of the grey square in the original image.
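The occlusion experiment above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: `model` is assumed to be a callable mapping an H×W×C image to a vector of class probabilities, and `patch`, `stride`, and the grey `fill` value are illustrative choices.

```python
import numpy as np

def occlusion_map(model, image, true_class, patch=8, stride=4, fill=0.5):
    """Slide a grey square over `image` and record the classifier's
    probability for the correct class at each square position.
    Low values in the returned heatmap mark image regions the
    classifier relies on for its decision."""
    h, w, _ = image.shape
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heatmap = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            occluded = image.copy()
            y, x = i * stride, j * stride
            occluded[y:y + patch, x:x + patch, :] = fill  # grey square
            heatmap[i, j] = model(occluded)[true_class]
    return heatmap
```

Plotting `heatmap` as a function of the square's position reproduces the sensitivity maps in the paper: the probability drops sharply when the square covers the object itself, not the background.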
Deconv Approach
DeconvNet For the relu layer, the forward pass is relu(z) = max(z, 0). The backward pass is R -> max(R, 0): the deconvnet rectifies the reconstructed signal itself, instead of gating it by the sign of the forward-pass input as ordinary backpropagation does.
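The contrast between the two backward rules can be shown in a short NumPy sketch (function names are illustrative):

```python
import numpy as np

def relu_forward(z):
    # Forward pass: elementwise rectification.
    return np.maximum(z, 0)

def backprop_relu(grad_out, z):
    # Ordinary backprop: pass the signal through only where the
    # *forward-pass input* z was positive.
    return grad_out * (z > 0)

def deconvnet_relu(recon):
    # Deconvnet rule: rectify the *reconstructed signal* itself,
    # independent of the forward-pass activation pattern.
    return np.maximum(recon, 0)
```

The deconvnet rule keeps only positive contributions in the reconstruction, which is what makes the projected visualizations interpretable as image-like structure.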
Method For each layer, randomly select a subset of feature maps. For each feature map, find the top 9 neurons with the highest activations. Projecting each one separately down to pixel space via the deconvnet reveals the different structures that excite a given feature map.
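Selecting the top-9 activations of a single feature map could be sketched as follows (a minimal illustration, assuming the map's responses over a dataset have been stacked into an `(N, H, W)` array; the function name is hypothetical):

```python
import numpy as np

def top9_activations(feature_map_batch):
    """For one feature map evaluated over N images (shape (N, H, W)),
    return the (image, row, col) indices of the 9 strongest
    activations, sorted in descending order of activation value."""
    flat = feature_map_batch.ravel()
    top = np.argpartition(flat, -9)[-9:]      # 9 largest, unordered
    top = top[np.argsort(flat[top])[::-1]]    # sort descending
    return np.unravel_index(top, feature_map_batch.shape)
```

Each returned index identifies one neuron firing, which would then be projected back to pixel space with the deconvnet and cropped to its receptive field.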
Result Can be seen in Fig. 4.2, 4.3, and 4.4. Alongside these visualizations, the corresponding image patches are shown.
• The strong grouping within each feature map.
• Hierarchical nature of the features in the network (layer 2: corners and other edge/color conjunctions; layer 3: textures, mesh patterns (r1, c1), and text (r2, c4); layer 4: more class-specific, like dog faces (r1, c1) and bird’s legs (r4, c2); layer 5: entire objects, like keyboards (r1, c11) and dogs (r4)).
• Greater invariance at higher layers.
• Exaggeration of discriminative parts of the image, e.g. eyes and noses of dogs (layer 4, r1, c1).
Feature Evolution During Training The lower layers of the model can be seen to converge within a few epochs. However, the upper layers only develop after a considerable number of epochs (40-50), demonstrating the need to let the model train until fully converged.
Feature Invariance Small transformations have a dramatic effect in the first layer of the model, but a lesser impact at the top feature layer, being quasi-linear for translation and scaling. However, the output is not invariant to rotation.