Semantic Compositional Networks for Visual Captioning


Zhe Gan, Chuang Gan, Xiaodong He, Yunchen Pu, Kenneth Tran, Jianfeng Gao, Lawrence Carin, Li Deng
A Semantic Compositional Network (SCN) is developed for image captioning, in which semantic concepts (i.e., tags) are detected from the image, and the probability of each tag is used to compose the parameters in a long short-term memory (LSTM) network. The SCN extends each weight matrix of the LSTM to an ensemble of tag-dependent weight matrices. The degree to which each member of the ensemble is used to generate an image caption is tied to the image-dependent probability of the corresponding tag. In addition to captioning images, we also extend the SCN to generate captions for video clips. We qualitatively analyze semantic composition in SCNs, and quantitatively evaluate the algorithm on three benchmark datasets: COCO, Flickr30k, and Youtube2Text. Experimental results show that the proposed method significantly outperforms prior state-of-the-art approaches, across multiple evaluation metrics.
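The core idea of composing LSTM parameters from tag probabilities can be sketched as a probability-weighted mixture over an ensemble of weight matrices. The sketch below is a minimal illustration of that mixing step only (not the paper's full factorized implementation); the function name and dimensions are illustrative assumptions.

```python
import numpy as np

def compose_weights(tag_probs, weight_ensemble):
    """Mix an ensemble of tag-dependent weight matrices.

    tag_probs: shape (K,) -- detected probability of each semantic tag.
    weight_ensemble: shape (K, d_out, d_in) -- one weight matrix per tag.
    Returns the composed matrix sum_k tag_probs[k] * weight_ensemble[k],
    so tags with higher probability contribute more to the LSTM weights.
    """
    return np.tensordot(tag_probs, weight_ensemble, axes=1)

# Toy example: 3 tags, mapping 4 input dims to 5 hidden dims.
rng = np.random.default_rng(0)
ensemble = rng.standard_normal((3, 5, 4))
s = np.array([0.9, 0.1, 0.0])  # image-dependent tag probabilities
W = compose_weights(s, ensemble)
```

In the paper this construction is applied to every weight matrix of the LSTM, so the caption decoder itself changes with the semantic content of the image.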
Comments: Accepted in CVPR 2017
Subjects: Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL); Learning (cs.LG)
Cite as: arXiv:1611.08002 [cs.CV] (or arXiv:1611.08002v2 [cs.CV] for this version)

Submission history

From: Zhe Gan [view email] 
[v1] Wed, 23 Nov 2016 21:22:22 GMT (1235kb,D)
[v2] Tue, 28 Mar 2017 18:33:51 GMT (2051kb,D)