
UAT: Universal Attention Transformer for Video Captioning

Video captioning via encoder–decoder structures is a successful approach to sentence generation. In addition, extracting multiple kinds of visual features with several feature extraction networks during encoding is a standard way to improve model performance...


Bibliographic Details
Main Authors: Im, Heeju, Choi, Yong-Suk
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9269373/
https://www.ncbi.nlm.nih.gov/pubmed/35808316
http://dx.doi.org/10.3390/s22134817
_version_ 1784744220221767680
author Im, Heeju
Choi, Yong-Suk
author_facet Im, Heeju
Choi, Yong-Suk
author_sort Im, Heeju
collection PubMed
description Video captioning via encoder–decoder structures is a successful approach to sentence generation. In addition, extracting multiple kinds of visual features with several feature extraction networks during encoding is a standard way to improve model performance. Such feature extraction networks are typically weight-frozen and based on convolutional neural networks (CNNs). However, these traditional feature extraction methods have several problems. First, because the feature extraction model is frozen, it cannot be trained further by backpropagating the loss from video captioning training; in particular, this prevents the feature extraction model from learning more about spatial information. Second, model complexity increases further when multiple CNNs are used. Additionally, the authors of the Vision Transformer (ViT) pointed out an inductive bias of CNNs, namely the local receptive field. Therefore, we propose a full transformer structure trained end to end for video captioning to overcome these problems. As the feature extraction model, we use a vision transformer (ViT) and propose feature extraction gates (FEGs) to enrich the input of the captioning model through that extraction model. Additionally, we design a universal encoder attraction (UEA) that takes the outputs of all encoder layers and performs self-attention over them. The UEA addresses the lack of information about the video’s temporal relationships, since our method uses only the appearance feature. We evaluate our model against several recent models on two benchmark datasets and show its competitive performance on the MSRVTT/MSVD datasets. We show that, although the proposed model performs captioning using only a single feature, in some cases it outperforms models that use several features.
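The abstract's description of the universal encoder attraction (UEA), taking the outputs of all encoder layers and applying self-attention over them, can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the class name, the layer and head counts, the feature dimension, and the choice to concatenate the per-layer outputs along the sequence axis before attending are all assumptions made for the example.

import torch
import torch.nn as nn

class UniversalEncoderAttentionSketch(nn.Module):
    """Toy illustration of self-attention over all encoder layer outputs (assumed design)."""

    def __init__(self, d_model=512, n_heads=8, n_layers=4):
        super().__init__()
        # Stand-in video encoder: a stack of standard transformer encoder layers.
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(n_layers)
        )
        # Self-attention applied across the collected per-layer outputs.
        self.universal_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, frame_feats):
        # frame_feats: (batch, frames, d_model), e.g. ViT features per sampled frame.
        layer_outputs = []
        h = frame_feats
        for layer in self.layers:
            h = layer(h)
            layer_outputs.append(h)
        # Concatenate every layer's output along the sequence axis so attention
        # can mix information from all encoder depths (an assumed fusion choice).
        all_outputs = torch.cat(layer_outputs, dim=1)   # (batch, n_layers*frames, d_model)
        fused, _ = self.universal_attn(all_outputs, all_outputs, all_outputs)
        return fused

if __name__ == "__main__":
    clip = torch.randn(2, 16, 512)   # 2 clips, 16 frames, 512-dim dummy features
    out = UniversalEncoderAttentionSketch()(clip)
    print(out.shape)                 # torch.Size([2, 64, 512])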
format Online
Article
Text
id pubmed-9269373
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9269373 2022-07-09 UAT: Universal Attention Transformer for Video Captioning Im, Heeju Choi, Yong-Suk Sensors (Basel) Article Video captioning via encoder–decoder structures is a successful approach to sentence generation. In addition, extracting multiple kinds of visual features with several feature extraction networks during encoding is a standard way to improve model performance. Such feature extraction networks are typically weight-frozen and based on convolutional neural networks (CNNs). However, these traditional feature extraction methods have several problems. First, because the feature extraction model is frozen, it cannot be trained further by backpropagating the loss from video captioning training; in particular, this prevents the feature extraction model from learning more about spatial information. Second, model complexity increases further when multiple CNNs are used. Additionally, the authors of the Vision Transformer (ViT) pointed out an inductive bias of CNNs, namely the local receptive field. Therefore, we propose a full transformer structure trained end to end for video captioning to overcome these problems. As the feature extraction model, we use a vision transformer (ViT) and propose feature extraction gates (FEGs) to enrich the input of the captioning model through that extraction model. Additionally, we design a universal encoder attraction (UEA) that takes the outputs of all encoder layers and performs self-attention over them. The UEA addresses the lack of information about the video’s temporal relationships, since our method uses only the appearance feature. We evaluate our model against several recent models on two benchmark datasets and show its competitive performance on the MSRVTT/MSVD datasets. We show that, although the proposed model performs captioning using only a single feature, in some cases it outperforms models that use several features. MDPI 2022-06-25 /pmc/articles/PMC9269373/ /pubmed/35808316 http://dx.doi.org/10.3390/s22134817 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Im, Heeju
Choi, Yong-Suk
UAT: Universal Attention Transformer for Video Captioning
title UAT: Universal Attention Transformer for Video Captioning
title_full UAT: Universal Attention Transformer for Video Captioning
title_fullStr UAT: Universal Attention Transformer for Video Captioning
title_full_unstemmed UAT: Universal Attention Transformer for Video Captioning
title_short UAT: Universal Attention Transformer for Video Captioning
title_sort uat: universal attention transformer for video captioning
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9269373/
https://www.ncbi.nlm.nih.gov/pubmed/35808316
http://dx.doi.org/10.3390/s22134817
work_keys_str_mv AT imheeju uatuniversalattentiontransformerforvideocaptioning
AT choiyongsuk uatuniversalattentiontransformerforvideocaptioning