[Deep Learning Paper Notes][Adversarial Examples] Intriguing properties of neural networks
1 Representation of High Level Neurons
1.1 Motivation
Prior work rests on the implicit assumption that the neurons in high-level layers form a distinguished basis which is particularly useful for extracting semantic information.
We found that random linear combinations of activations are semantically indistinguishable from the activations themselves. This calls into question the notion that neural networks disentangle factors of variation across individual activations. It suggests that it is the space, rather than the individual neurons, that contains the semantic information in the high-level layers of neural networks.
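The experiment above can be sketched as follows: inspect the inputs that most strongly activate a single coordinate (the "natural basis") versus those that maximize the projection onto a random unit direction in the same feature space. The activation matrix and dimensions below are illustrative stand-ins, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_images, d = 100, 32
# Hypothetical high-level activations, one row per image.
Phi = rng.normal(size=(n_images, d))

def top_images(direction, k=5):
    """Indices of the k images with the largest activation along `direction`."""
    return np.argsort(Phi @ direction)[-k:][::-1]

# Natural-basis direction: a single "neuron" i.
i = 7
e_i = np.zeros(d)
e_i[i] = 1.0

# Random direction in the same feature space.
v = rng.normal(size=d)
v /= np.linalg.norm(v)

natural = top_images(e_i)     # images maximizing neuron i
random_dir = top_images(v)    # images maximizing a random combination
```

The paper's observation is that, when visualized, both sets of top images appear equally semantically coherent, so no special status attaches to the coordinate directions.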
2 Adversarial Examples
2.1 Motivation
Adding an imperceptibly small perturbation to a correctly classified input image can cause it to be misclassified.
2.2 Optimization
For a given image X, we seek the smallest perturbation ε ∈ R^(D×H×W) such that X + ε is wrongly classified as a target class k:

min ‖ε‖_2  subject to  f(X + ε) = k  and  X + ε ∈ [0, 1]^(D×H×W)

This is a box-constrained optimization problem. Since the exact problem is hard, it is approximated by minimizing c‖ε‖ + loss_f(X + ε, k) with box-constrained L-BFGS, where loss_f is the classifier's loss for target class k and c > 0 is found by line search.
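A minimal sketch of this box-constrained approximation, using a toy linear softmax "classifier" in place of a deep network (the weights, dimensions, and constant c below are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize

# Toy "network": f(x) = softmax(W x + b). Weights are illustrative only.
rng = np.random.default_rng(0)
D, K = 20, 3                      # input dimension, number of classes
W = rng.normal(size=(K, D))
b = rng.normal(size=K)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def target_loss(x, k):
    """Cross-entropy of f(x) with respect to the target (wrong) class k."""
    return -np.log(softmax(W @ x + b)[k] + 1e-12)

x0 = rng.uniform(0.3, 0.7, size=D)       # the original input X
orig = int(np.argmax(W @ x0 + b))        # its current predicted class
k = (orig + 1) % K                       # any class other than the original
c = 0.1                                  # trades off ||eps|| vs. target loss

def objective(eps):
    # c * ||eps||^2 + loss_f(X + eps, k), the penalized surrogate objective
    return c * np.sum(eps**2) + target_loss(x0 + eps, k)

# Box constraint: every component of X + eps must stay in [0, 1].
bounds = [(-x0[i], 1.0 - x0[i]) for i in range(D)]
res = minimize(objective, np.zeros(D), method="L-BFGS-B", bounds=bounds)
eps = res.x
adv = x0 + eps                           # candidate adversarial example
```

In the paper, c is additionally tuned by line search to the smallest value for which the minimizer is actually misclassified; the sketch above fixes c for brevity.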
2.3 Results
See Fig. 10.1. Adversarial examples show that inputs in the vicinity of training-set images can be assigned unexpected classification labels.
Cross-model generalization: a relatively large fraction of adversarial examples will be misclassified by networks trained from scratch with different hyper-parameters (number of layers, regularization, or initial weights).
Cross-training-set generalization: a relatively large fraction of adversarial examples will be misclassified by networks trained from scratch on a disjoint training set.
This suggests that adversarial examples are somewhat universal and not just the result of overfitting to a particular model or to the specific selection of the training set.
The set of adversarial negatives has extremely low probability, and thus is never (or rarely) observed in the test set; yet it is dense (much like the rational numbers), and so it is found near virtually every test case.