How does deep residual learning work?


Deep residual learning is a very intriguing technique developed by researchers at Microsoft Research. The results are quite impressive: it took first place in the ILSVRC 2015 image classification competition. The network they used had 152 layers, an impressive 8 times deeper than a comparable VGG network. The paper (http://arxiv.org/pdf/1512.03385v...) includes a figure comparing their network with a similarly constructed VGG convolutional network.

Jürgen Schmidhuber, however, claims that it is the same thing as an LSTM without gates (see: Microsoft Wins ImageNet 2015 through Feedforward LSTM without Gates). That does seem plausible if you look at what an LSTM node does with its cell state.
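To make the comparison concrete, here is a small sketch (plain NumPy, with hypothetical weights; a real LSTM update also feeds in the previous hidden state, which is omitted here for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))  # hypothetical weights, for illustration only

def g(x):
    # Stand-in for the learned transformation inside the cell/block.
    return np.tanh(W @ x)

# Gated LSTM-style cell-state update: c_t = f * c_{t-1} + i * g(x_t)
def lstm_state_update(c_prev, x, f_gate, i_gate):
    return f_gate * c_prev + i_gate * g(x)

# With both gates pinned to 1 ("LSTM without gates") the update collapses
# to a pure additive recurrence, c_t = c_{t-1} + g(x_t), which has the
# same form as a residual connection y = x + F(x).
c = np.zeros(4)
x = rng.normal(size=4)
gateless = lstm_state_update(c, x, f_gate=1.0, i_gate=1.0)
residual = c + g(x)
assert np.allclose(gateless, residual)
```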

In other words, the input of a lower layer is made available to a node in a higher layer. The difference, of course, is that the Microsoft residual network, being applied to image classification, builds its layers out of convolutions. Schmidhuber's research group has published results on "Highway Networks" (http://arxiv.org/pdf/1507.06228v...) with depths up to 100 layers.
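As a concrete sketch of that building block, here is a minimal PyTorch rendering (assuming matching input/output shapes so the identity shortcut needs no projection; the paper also uses 1x1 projection shortcuts when shapes change, and a specific 152-layer layout not reproduced here):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Minimal residual block: output = ReLU(F(x) + x)."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))  # F(x): a small stack of conv layers
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # shortcut: the lower layer's input is added back in

# Usage: stack many of these blocks to go very deep.
block = ResidualBlock(64)
y = block(torch.randn(1, 64, 32, 32))
```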

However, despite the similarities of LSTMs and Highway Networks to the residual network, the result is still impressive: state-of-the-art accuracy from a very deep, 152-layer network. A recent paper from the Weizmann Institute of Science (http://arxiv.org/pdf/1512.03965....) gives a mathematical proof that making networks deeper can be strictly more powerful than making them wider. The implication of these three results is that future Deep Learning progress will lead to the development of even deeper networks.
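Schematically, the kind of depth-vs-width separation proved there looks like this (a paraphrase of the theorem's shape, not its exact statement):

```latex
% Depth-vs-width separation (schematic form): some function $f$ on
% $\mathbb{R}^d$ is computable by a 3-layer network of width
% $\mathrm{poly}(d)$, yet any 2-layer network that approximates $f$
% well must have width exponential in $d$.
\exists f : \mathbb{R}^d \to \mathbb{R} :\quad
\underbrace{\text{3 layers, width } \mathrm{poly}(d)}_{\text{suffices}}
\qquad \text{vs.} \qquad
\underbrace{\text{2 layers, width } \exp(\Omega(d))}_{\text{required}}
```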

Google's GoogLeNet, published in late 2014, has 22 layers. Two generations later, Google mentioned its Inception 7 network with 50+ layers.

In all of the Residual, Highway, and Inception networks, you will notice that the same input can travel to the output along multiple paths that pass through different numbers of layers.
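To see the multiple-paths point concretely, consider two stacked residual blocks: unrolling the composition shows the input reaching the output along paths of length 0, 1, and 2 (a toy NumPy sketch, not any particular network):

```python
import numpy as np

rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))

def f1(x):  # first residual branch
    return np.tanh(W1 @ x)

def f2(x):  # second residual branch
    return np.tanh(W2 @ x)

x = rng.normal(size=4)

# Two stacked residual blocks:
y = x + f1(x)
z = y + f2(y)

# Expanding the composition makes the paths explicit:
#   z = x  +  f1(x)  +  f2(x + f1(x))
# i.e. the input reaches the output through 0, 1, or 2 blocks.
expanded = x + f1(x) + f2(x + f1(x))
assert np.allclose(z, expanded)
# With n blocks the expansion has 2**n such paths, which is the
# ensemble-of-shallower-networks reading mentioned in the update below.
```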

The trend is pretty clear. Not only are deeper neural networks more accurate, they also require fewer weights.

Update: Two recent papers have shown (1) Residual Nets being equivalent to RNNs and (2) Residual Nets acting more like ensembles across several layers.


Original source: https://www.quora.com/How-does-deep-residual-learning-work
