Article ID: 11012485
Journal: Neurocomputing
Published Year: 2018
Pages: 16 Pages
File Type: PDF
Abstract
Image captioning aims to describe the content of an image with a sentence. Describing images is natural for people, but it remains a challenging and important task from the perspective of image understanding. In this paper, we propose two innovations to improve the performance of this sequence learning problem. First, we introduce a new attention method, triple attention (TA-LSTM), which leverages image context information at every stage of the LSTM. Second, we redesign the structure of the basic LSTM so that it employs both stacked and parallel LSTMs, which we call PS-LSTM. Compared with a standard LSTM, this structure allows the model to hold more parameters in a single model and gives it an inherent ensembling ability. In experiments on the publicly available MSCOCO dataset, our final TA-PS-LSTM model achieves performance comparable to state-of-the-art methods.
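The abstract only outlines the PS-LSTM decoder at a high level, so the following is a minimal sketch of one possible reading of it, not the authors' implementation: a bottom LSTM feeds two parallel top LSTMs whose outputs are averaged, and image features are re-attended before each layer as a loose stand-in for attending "at every stage of LSTM". All class names, layer sizes, and the averaging rule are illustrative assumptions; PyTorch is assumed as the framework.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftAttention(nn.Module):
    """Standard soft attention over image region features, conditioned on a hidden state."""
    def __init__(self, feat_dim, hid_dim, att_dim=512):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, att_dim)
        self.hid_proj = nn.Linear(hid_dim, att_dim)
        self.score = nn.Linear(att_dim, 1)

    def forward(self, feats, h):
        # feats: (B, R, feat_dim) region features; h: (B, hid_dim) decoder state
        e = self.score(torch.tanh(self.feat_proj(feats) + self.hid_proj(h).unsqueeze(1)))
        alpha = F.softmax(e, dim=1)            # (B, R, 1) attention weights over regions
        return (alpha * feats).sum(dim=1)      # (B, feat_dim) attended image context

class PSLSTMCaptioner(nn.Module):
    """Sketch of a parallel-and-stacked LSTM decoder (assumed structure): a bottom LSTM
    feeds two parallel top LSTMs whose outputs are averaged before word prediction."""
    def __init__(self, vocab_size, feat_dim=2048, emb_dim=512, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.att = SoftAttention(feat_dim, hid_dim)
        self.bottom = nn.LSTMCell(emb_dim + feat_dim, hid_dim)
        self.top_a = nn.LSTMCell(hid_dim + feat_dim, hid_dim)
        self.top_b = nn.LSTMCell(hid_dim + feat_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, feats, captions):
        # feats: (B, R, feat_dim) image regions; captions: (B, T) token ids (teacher forcing)
        B, T = captions.shape
        h = c = feats.new_zeros(B, self.bottom.hidden_size)
        ha = ca = hb = cb = h.clone()
        logits = []
        for t in range(T):
            w = self.embed(captions[:, t])
            ctx = self.att(feats, h)                       # image context for the bottom LSTM
            h, c = self.bottom(torch.cat([w, ctx], dim=1), (h, c))
            ctx_top = self.att(feats, h)                   # re-attend for the stacked layer
            ha, ca = self.top_a(torch.cat([h, ctx_top], dim=1), (ha, ca))
            hb, cb = self.top_b(torch.cat([h, ctx_top], dim=1), (hb, cb))
            logits.append(self.out((ha + hb) / 2))         # average the parallel branches
        return torch.stack(logits, dim=1)                  # (B, T, vocab_size)

Averaging the two parallel branches is the simplest way to realize the "ensemble ability in a single model" described in the abstract; the paper may combine them differently (e.g., learned weighting or concatenation).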
Related Topics
Physical Sciences and Engineering, Computer Science, Artificial Intelligence