Exploring Sparsity in Recurrent Neural Networks

Sharan Narang, Gregory Diamos, Shubho Sengupta, Erich Elsen
Recurrent Neural Networks (RNNs) are widely used to solve a variety of problems, and as the quantity of data and the amount of available compute have increased, so have model sizes. The number of parameters in recent state-of-the-art networks makes them hard to deploy, especially on mobile phones and embedded devices. The challenge is due to both the size of the model and the time it takes to evaluate it. In order to deploy these RNNs efficiently, we propose a technique to reduce the parameters of a network by pruning weights during the initial training of the network. At the end of training, the parameters of the network are sparse while accuracy is still close to that of the original dense neural network. The network size is reduced by 8x and the time required to train the model remains constant. Additionally, we can prune a larger dense network to achieve better-than-baseline performance while still significantly reducing the total number of parameters. Pruning RNNs reduces the size of the model and can also help achieve significant inference-time speed-up using sparse matrix multiply. Benchmarks show that with our technique, model size can be reduced by 90% and speed-up is around 2x to 7x.
Comments: Published as a conference paper at ICLR 2017
Subjects: Learning (cs.LG); Computation and Language (cs.CL)
Cite as: arXiv:1704.05119 [cs.LG] (or arXiv:1704.05119v1 [cs.LG] for this version)

Submission history

From: Sharan Narang
[v1] Mon, 17 Apr 2017 20:42:05 GMT (259kb,D)
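
The abstract describes pruning weights during the initial training run so that the final parameters are sparse. Below is a minimal PyTorch sketch of that general idea: weights whose magnitude falls below a threshold that ramps up during training are zeroed and kept at zero. The `GradualMagnitudePruner` class, its linear threshold schedule, and all hyperparameter values here are illustrative assumptions, not the paper's exact algorithm or settings.

```python
import torch
import torch.nn as nn

class GradualMagnitudePruner:
    """Sketch of prune-during-training: zero out weights whose magnitude
    falls below a threshold that grows over the course of training.
    (Illustrative schedule; not the paper's exact hyperparameters.)"""

    def __init__(self, module, start_iter=2000, end_iter=20000, final_threshold=0.1):
        self.module = module              # e.g. an nn.GRU or nn.Linear layer
        self.start_iter = start_iter      # begin pruning after a warm-up period
        self.end_iter = end_iter          # threshold stops growing at this iteration
        self.final_threshold = final_threshold
        # Prune weight matrices only (dim > 1), not bias vectors.
        self.masks = {name: torch.ones_like(p)
                      for name, p in module.named_parameters() if p.dim() > 1}

    def threshold(self, it):
        # Linearly ramp the magnitude threshold from 0 to final_threshold.
        if it < self.start_iter:
            return 0.0
        frac = min(1.0, (it - self.start_iter) / (self.end_iter - self.start_iter))
        return frac * self.final_threshold

    def step(self, it):
        eps = self.threshold(it)
        with torch.no_grad():
            for name, p in self.module.named_parameters():
                if name in self.masks:
                    # Once a weight is pruned it stays pruned (mask only shrinks).
                    self.masks[name] *= (p.abs() > eps).float()
                    p *= self.masks[name]
```

Usage: after each `optimizer.step()`, call `pruner.step(iteration)` so newly updated weights below the current threshold are zeroed and previously pruned weights remain zero; the resulting sparse weight matrices can then be stored compactly and evaluated with a sparse matrix multiply at inference time.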