How does deep residual learning work?
The Deep Residual Learning network is an intriguing architecture developed by researchers at Microsoft Research. The results are impressive: it took first place in the ILSVRC 2015 image classification competition. The network they used had 152 layers, an impressive 8 times deeper than a comparable VGG network. The paper (http://arxiv.org/pdf/1512.03385v...) includes a figure comparing their network with a similarly constructed VGG convolutional network.
Jürgen Schmidhuber, however, claims that it is essentially the same thing as an LSTM without gates (see: "Microsoft Wins ImageNet 2015 through Feedforward LSTM without Gates"), which does seem plausible if you look at the structure of an LSTM node.
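Schmidhuber's reduction can be sketched with a toy fully connected layer (a hedged illustration, not the actual architecture from either paper): write the gated update as y = t·H(x) + c·x. A highway layer couples the gates as c = 1 − t; dropping the gates entirely, i.e. pinning t = c = 1, leaves exactly the residual form y = x + H(x).

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4)) * 0.1

def H(x):
    # the layer's learned transformation (one dense layer + ReLU, for illustration)
    return np.maximum(0.0, W @ x)

def gated(x, t, c):
    # general gated update: transform gate t, carry gate c
    return t * H(x) + c * x

def highway(x, t):
    # highway layer couples the two gates: c = 1 - t
    return gated(x, t, 1.0 - t)

def residual(x):
    # residual layer: no gates at all, identity carry plus transformation
    return x + H(x)

x = rng.normal(size=4)
# with both gates pinned to 1, the gated update is the residual update
assert np.allclose(gated(x, 1.0, 1.0), residual(x))
```

In the actual highway network the gate t is a learned sigmoid of x, so it never reaches exactly 1; the residual network simply removes that machinery.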
In other words, the inputs of a lower layer are made available to a node in a higher layer. The difference, of course, is that Microsoft's residual network, when applied to image classification tasks, employs convolutional layers in its construction. Schmidhuber's research group has published results on "Highway Networks" (http://arxiv.org/pdf/1507.06228v...) with depths of up to 100 layers.
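Concretely, the residual building block computes y = F(x) + x, where F is a small stack of layers and the "+ x" is the identity shortcut carrying the lower layer's input upward. A minimal sketch (dense layers are assumed here for brevity; the real network uses convolutions):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
W1 = rng.normal(size=(d, d)) * 0.1
W2 = rng.normal(size=(d, d)) * 0.1

def relu(z):
    return np.maximum(0.0, z)

def residual_block(x):
    # F(x): the stacked transformation (two layers here, as in the paper's basic block)
    f = W2 @ relu(W1 @ x)
    # identity shortcut: the block's input is added back before the final activation
    return relu(x + f)

x = rng.normal(size=d)
y = residual_block(x)
```

Because the shortcut is the identity, gradients can flow straight through the "+ x" term, which is what makes very deep stacks of these blocks trainable.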
However, whatever the similarities between LSTMs and Highway Networks on the one hand and residual networks on the other, the results remain impressive: state-of-the-art accuracy from a very deep, 152-layer network. A recent paper from the Weizmann Institute of Science (http://arxiv.org/pdf/1512.03965....) gives a mathematical proof of the advantage of deeper networks over wider ones. The implication of these three results is that future Deep Learning progress will lead to even deeper networks.
Google's GoogLeNet, published in late 2014, has 22 layers. Two generations later, Google mentioned its Inception 7 network with more than 50 layers.
In all of the Residual, Highway, and Inception networks, you will notice that the same inputs travel through paths with different numbers of layers.
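That path structure can be checked on a toy example: unrolling two stacked residual blocks gives y = x + f1(x) + f2(x + f1(x)), one additive term per subset of blocks traversed (2^n paths for n blocks). With the linear stand-in layers assumed below the expansion is exact; with nonlinear layers it holds only approximately.

```python
def f1(x):
    return 0.5 * x   # linear stand-in for one block's transformation

def f2(x):
    return 0.25 * x  # linear stand-in for the next block's transformation

def res_block(x, f):
    # one residual block: output = input + transformation(input)
    return x + f(x)

x = 2.0
y = res_block(res_block(x, f1), f2)   # two stacked blocks

# unrolled: four paths through the network --
# skip both, take f1 only, take f2 only, take f1 then f2
expanded = x + f1(x) + f2(x) + f2(f1(x))
assert abs(y - expanded) < 1e-12
```

Each added block doubles the number of paths, so a deep residual network behaves like a large implicit collection of shallower subnetworks.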
The trend is pretty clear: deeper neural networks are not only more accurate, they also require fewer weights.
Update: Two recent papers have shown (1) residual nets being equivalent to RNNs and (2) residual nets acting more like ensembles of networks spanning several depths.
Original source: https://www.quora.com/How-does-deep-residual-learning-work