Semantic guidance network for video captioning
Video captioning is a challenging task that aims to generate rich natural language descriptions, and it has become a promising direction for artificial intelligence. However, most existing methods tend to overlook visual information redundancy and scene information omission caused by the limitations of their sampling strategies. To address this problem, a semantic guidance network for video captioning is proposed. More specifically, a novel scene frame sampling strategy is first proposed to select key scene frames. Then, a vision transformer encoder is applied to learn visual and semantic information with a global view, alleviating the loss of long-range dependency information in the encoder's hidden layers. Finally, a non-parametric metric learning module is introduced to compute the similarity between the ground-truth caption and the predicted caption, and the model is optimized in an end-to-end manner. Experiments on the benchmark MSR-VTT and MSVD datasets show that the proposed method effectively improves description accuracy and generalization ability.
| Main Authors: | Guo, Lan; Zhao, Hong; Chen, ZhiWen; Han, ZeYu |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | Nature Publishing Group UK, 2023 |
| Subjects: | |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10522692/ https://www.ncbi.nlm.nih.gov/pubmed/37752267 http://dx.doi.org/10.1038/s41598-023-43010-3 |
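
The abstract above mentions a scene frame sampling strategy that selects key scene frames, but this record carries no implementation details. As a rough, hypothetical illustration only (not the authors' method), the sketch below flags candidate key frames wherever the colour histogram of consecutive frames changes sharply:

```python
import numpy as np

def select_scene_frames(frames: np.ndarray, threshold: float = 0.3) -> list[int]:
    """Pick frame indices where the colour histogram changes sharply.

    `frames` has shape (T, H, W, 3) with uint8 RGB frames. Histogram-difference
    scene-change detection is only an illustrative baseline, not the sampling
    strategy proposed in the paper.
    """
    selected = [0]  # always keep the first frame
    prev_hist = None
    for t, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=32, range=(0, 255))
        hist = hist / hist.sum()
        if prev_hist is not None:
            # L1 distance between normalised histograms of consecutive frames
            if np.abs(hist - prev_hist).sum() > threshold:
                selected.append(t)
        prev_hist = hist
    return selected
```

In practice the threshold would have to be tuned per dataset; the point is simply that scene-change cues, rather than uniform sampling, decide which frames are kept.
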
| _version_ | 1785110407244939264 |
|---|---|
| author | Guo, Lan; Zhao, Hong; Chen, ZhiWen; Han, ZeYu |
| author_facet | Guo, Lan; Zhao, Hong; Chen, ZhiWen; Han, ZeYu |
| author_sort | Guo, Lan |
| collection | PubMed |
| description | Video captioning is a challenging task that aims to generate rich natural language descriptions, and it has become a promising direction for artificial intelligence. However, most existing methods tend to overlook visual information redundancy and scene information omission caused by the limitations of their sampling strategies. To address this problem, a semantic guidance network for video captioning is proposed. More specifically, a novel scene frame sampling strategy is first proposed to select key scene frames. Then, a vision transformer encoder is applied to learn visual and semantic information with a global view, alleviating the loss of long-range dependency information in the encoder's hidden layers. Finally, a non-parametric metric learning module is introduced to compute the similarity between the ground-truth caption and the predicted caption, and the model is optimized in an end-to-end manner. Experiments on the benchmark MSR-VTT and MSVD datasets show that the proposed method effectively improves description accuracy and generalization ability. |
| format | Online Article Text |
| id | pubmed-10522692 |
| institution | National Center for Biotechnology Information |
| language | English |
| publishDate | 2023 |
| publisher | Nature Publishing Group UK |
| record_format | MEDLINE/PubMed |
| spelling | pubmed-10522692 2023-09-28 Semantic guidance network for video captioning Guo, Lan; Zhao, Hong; Chen, ZhiWen; Han, ZeYu Sci Rep Article Video captioning is a challenging task that aims to generate rich natural language descriptions, and it has become a promising direction for artificial intelligence. However, most existing methods tend to overlook visual information redundancy and scene information omission caused by the limitations of their sampling strategies. To address this problem, a semantic guidance network for video captioning is proposed. More specifically, a novel scene frame sampling strategy is first proposed to select key scene frames. Then, a vision transformer encoder is applied to learn visual and semantic information with a global view, alleviating the loss of long-range dependency information in the encoder's hidden layers. Finally, a non-parametric metric learning module is introduced to compute the similarity between the ground-truth caption and the predicted caption, and the model is optimized in an end-to-end manner. Experiments on the benchmark MSR-VTT and MSVD datasets show that the proposed method effectively improves description accuracy and generalization ability. Nature Publishing Group UK 2023-09-26 /pmc/articles/PMC10522692/ /pubmed/37752267 http://dx.doi.org/10.1038/s41598-023-43010-3 Text en © The Author(s) 2023 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/). |
| spellingShingle | Article; Guo, Lan; Zhao, Hong; Chen, ZhiWen; Han, ZeYu; Semantic guidance network for video captioning |
| title | Semantic guidance network for video captioning |
| title_full | Semantic guidance network for video captioning |
| title_fullStr | Semantic guidance network for video captioning |
| title_full_unstemmed | Semantic guidance network for video captioning |
| title_short | Semantic guidance network for video captioning |
| title_sort | semantic guidance network for video captioning |
| topic | Article |
| url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10522692/ https://www.ncbi.nlm.nih.gov/pubmed/37752267 http://dx.doi.org/10.1038/s41598-023-43010-3 |
| work_keys_str_mv | AT guolan semanticguidancenetworkforvideocaptioning; AT zhaohong semanticguidancenetworkforvideocaptioning; AT chenzhiwen semanticguidancenetworkforvideocaptioning; AT hanzeyu semanticguidancenetworkforvideocaptioning |
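
The description field also mentions a non-parametric metric learning module that scores the similarity between the predicted caption and the ground truth. The exact formulation is not given in this record; the following minimal sketch uses cosine similarity between bag-of-words caption vectors as an assumed stand-in, not the paper's actual module:

```python
import numpy as np
from collections import Counter

def caption_similarity(prediction: str, ground_truth: str) -> float:
    """Cosine similarity between bag-of-words vectors of two captions.

    A hypothetical stand-in for the paper's non-parametric metric learning
    module; the record does not specify the module's exact form.
    """
    pred_counts = Counter(prediction.lower().split())
    gt_counts = Counter(ground_truth.lower().split())
    vocab = sorted(set(pred_counts) | set(gt_counts))
    p = np.array([pred_counts[w] for w in vocab], dtype=float)
    g = np.array([gt_counts[w] for w in vocab], dtype=float)
    denom = np.linalg.norm(p) * np.linalg.norm(g)
    return float(p @ g / denom) if denom > 0 else 0.0

# A higher score indicates a closer match to the reference caption.
print(caption_similarity("a man is playing a guitar",
                         "a man plays the guitar"))
```

The abstract states that such a similarity signal is used to optimize the model end-to-end; the bag-of-words representation here is purely illustrative.
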