
An image caption model based on attention mechanism and deep reinforcement learning


Bibliographic Details
Main Authors: Bai, Tong, Zhou, Sen, Pang, Yu, Luo, Jiasai, Wang, Huiqian, Du, Ya
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10585027/
https://www.ncbi.nlm.nih.gov/pubmed/37869519
http://dx.doi.org/10.3389/fnins.2023.1270850
_version_ 1785122863114616832
author Bai, Tong
Zhou, Sen
Pang, Yu
Luo, Jiasai
Wang, Huiqian
Du, Ya
author_facet Bai, Tong
Zhou, Sen
Pang, Yu
Luo, Jiasai
Wang, Huiqian
Du, Ya
author_sort Bai, Tong
collection PubMed
description Image captioning technology aims to convert visual features of images, extracted by computers, into meaningful semantic information, enabling computers to generate text descriptions that resemble human perception and supporting tasks such as image classification, retrieval, and analysis. In recent years, the performance of image captioning has been significantly enhanced by the introduction of the encoder-decoder architecture from machine translation and the use of deep neural networks. However, several challenges persist in this domain. This paper therefore proposes a novel method to address the loss of visual information and the lack of dynamic adjustment of input images during decoding. We introduce a guided decoding network that connects the encoding and decoding parts; through this connection, encoding information guides the decoding process and enables automatic adjustment of the decoding information. In addition, a Dense Convolutional Network (DenseNet) and Multiple Instance Learning (MIL) are adopted in the image encoder, and a Nested Long Short-Term Memory (NLSTM) is used as the decoder, enhancing the extraction and parsing of image information during encoding and decoding. To further improve performance, this study incorporates an attention mechanism to focus on details and constructs a double-layer decoding structure, which helps the model provide more detailed descriptions and richer semantic information. Furthermore, Deep Reinforcement Learning (DRL) is employed to train the model by directly optimizing the same set of evaluation metrics, resolving the inconsistency between training and evaluation criteria.
Finally, the model is trained and tested on the MS COCO and Flickr30k datasets, and the results show that it outperforms commonly used models on evaluation metrics such as BLEU, METEOR, and CIDEr.
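The abstract states that DRL is used to train the model by directly optimizing the evaluation metrics. A common realization of this idea is a self-critical reward: the metric score of a sampled caption minus the score of the model's own greedy caption. The sketch below illustrates only that reward computation; it is an assumption about the general approach (the record does not give the paper's exact formulation), and `toy_metric` is a stand-in unigram-overlap score, not CIDEr or BLEU.

```python
# Minimal sketch of a self-critical RL reward for captioning (assumption:
# the paper's "directly optimizing the evaluation indexes" follows this
# general scheme). toy_metric is a hypothetical stand-in for CIDEr/BLEU.

def toy_metric(candidate: str, reference: str) -> float:
    """Unigram-overlap score in [0, 1], standing in for CIDEr/BLEU."""
    cand, ref = set(candidate.split()), set(reference.split())
    return len(cand & ref) / max(len(ref), 1)

def self_critical_reward(sampled: str, greedy: str, reference: str) -> float:
    """Reward = metric(sampled) - metric(greedy baseline).

    A positive reward reinforces sampled captions that beat the model's
    own greedy decoding; a negative reward suppresses them. Using the
    greedy caption as baseline reduces gradient variance without a
    learned critic.
    """
    return toy_metric(sampled, reference) - toy_metric(greedy, reference)

r = self_critical_reward(
    sampled="a dog runs on the grass",
    greedy="a dog on grass",
    reference="a brown dog runs across the grass",
)
```

In full training, this scalar reward would weight the log-probability of the sampled caption in a REINFORCE-style policy-gradient loss.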
format Online
Article
Text
id pubmed-10585027
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-105850272023-10-20 An image caption model based on attention mechanism and deep reinforcement learning Bai, Tong Zhou, Sen Pang, Yu Luo, Jiasai Wang, Huiqian Du, Ya Front Neurosci Neuroscience Frontiers Media S.A. 2023-10-05 /pmc/articles/PMC10585027/ /pubmed/37869519 http://dx.doi.org/10.3389/fnins.2023.1270850 Text en Copyright © 2023 Bai, Zhou, Pang, Luo, Wang and Du. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Neuroscience
Bai, Tong
Zhou, Sen
Pang, Yu
Luo, Jiasai
Wang, Huiqian
Du, Ya
An image caption model based on attention mechanism and deep reinforcement learning
title An image caption model based on attention mechanism and deep reinforcement learning
title_full An image caption model based on attention mechanism and deep reinforcement learning
title_fullStr An image caption model based on attention mechanism and deep reinforcement learning
title_full_unstemmed An image caption model based on attention mechanism and deep reinforcement learning
title_short An image caption model based on attention mechanism and deep reinforcement learning
title_sort image caption model based on attention mechanism and deep reinforcement learning
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10585027/
https://www.ncbi.nlm.nih.gov/pubmed/37869519
http://dx.doi.org/10.3389/fnins.2023.1270850
work_keys_str_mv AT baitong animagecaptionmodelbasedonattentionmechanismanddeepreinforcementlearning
AT zhousen animagecaptionmodelbasedonattentionmechanismanddeepreinforcementlearning
AT pangyu animagecaptionmodelbasedonattentionmechanismanddeepreinforcementlearning
AT luojiasai animagecaptionmodelbasedonattentionmechanismanddeepreinforcementlearning
AT wanghuiqian animagecaptionmodelbasedonattentionmechanismanddeepreinforcementlearning
AT duya animagecaptionmodelbasedonattentionmechanismanddeepreinforcementlearning
AT baitong imagecaptionmodelbasedonattentionmechanismanddeepreinforcementlearning
AT zhousen imagecaptionmodelbasedonattentionmechanismanddeepreinforcementlearning
AT pangyu imagecaptionmodelbasedonattentionmechanismanddeepreinforcementlearning
AT luojiasai imagecaptionmodelbasedonattentionmechanismanddeepreinforcementlearning
AT wanghuiqian imagecaptionmodelbasedonattentionmechanismanddeepreinforcementlearning
AT duya imagecaptionmodelbasedonattentionmechanismanddeepreinforcementlearning