A Semantics-Assisted Video Captioning Model Trained With Scheduled Sampling

Bibliographic Details
Main Authors: Chen, Haoran, Lin, Ke, Maye, Alexander, Li, Jianmin, Hu, Xiaolin
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7805957/
https://www.ncbi.nlm.nih.gov/pubmed/33501293
http://dx.doi.org/10.3389/frobt.2020.475767
_version_ 1783636421704155136
author Chen, Haoran
Lin, Ke
Maye, Alexander
Li, Jianmin
Hu, Xiaolin
author_facet Chen, Haoran
Lin, Ke
Maye, Alexander
Li, Jianmin
Hu, Xiaolin
author_sort Chen, Haoran
collection PubMed
description Given the features of a video, recurrent neural networks can be used to automatically generate a caption for the video. Existing methods for video captioning have at least three limitations. First, semantic information has been widely applied to boost the performance of video captioning models, but existing networks often fail to provide meaningful semantic features. Second, the Teacher Forcing algorithm is often used to optimize video captioning models, but different strategies guide word generation during training and inference, which leads to poor performance. Third, current video captioning models are prone to generating relatively short captions that express the video content inadequately. To resolve these three problems, we suggest three corresponding improvements. First, we propose a metric for comparing the quality of semantic features, and we use appropriate features as input to a semantic detection network (SDN) of adequate complexity so that it generates meaningful semantic features for videos. Second, we apply a scheduled sampling strategy that gradually shifts training from a teacher-guided manner toward a more self-teaching manner. Finally, the ordinary log-probability loss function is modulated by sentence length, which alleviates the tendency to generate short sentences. Our model achieves better results than previous models on the YouTube2Text dataset and is competitive with the previous best model on the MSR-VTT dataset. (A minimal illustrative sketch of the scheduled-sampling training loop and the length-adjusted loss appears after this record.)
format Online
Article
Text
id pubmed-7805957
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-7805957 2021-01-25 A Semantics-Assisted Video Captioning Model Trained With Scheduled Sampling. Chen, Haoran; Lin, Ke; Maye, Alexander; Li, Jianmin; Hu, Xiaolin. Front Robot AI, Robotics and AI. Frontiers Media S.A. 2020-09-30 /pmc/articles/PMC7805957/ /pubmed/33501293 http://dx.doi.org/10.3389/frobt.2020.475767 Text en Copyright © 2020 Chen, Lin, Maye, Li and Hu. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Robotics and AI
Chen, Haoran
Lin, Ke
Maye, Alexander
Li, Jianmin
Hu, Xiaolin
A Semantics-Assisted Video Captioning Model Trained With Scheduled Sampling
title A Semantics-Assisted Video Captioning Model Trained With Scheduled Sampling
title_full A Semantics-Assisted Video Captioning Model Trained With Scheduled Sampling
title_fullStr A Semantics-Assisted Video Captioning Model Trained With Scheduled Sampling
title_full_unstemmed A Semantics-Assisted Video Captioning Model Trained With Scheduled Sampling
title_short A Semantics-Assisted Video Captioning Model Trained With Scheduled Sampling
title_sort semantics-assisted video captioning model trained with scheduled sampling
topic Robotics and AI
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7805957/
https://www.ncbi.nlm.nih.gov/pubmed/33501293
http://dx.doi.org/10.3389/frobt.2020.475767
work_keys_str_mv AT chenhaoran asemanticsassistedvideocaptioningmodeltrainedwithscheduledsampling
AT linke asemanticsassistedvideocaptioningmodeltrainedwithscheduledsampling
AT mayealexander asemanticsassistedvideocaptioningmodeltrainedwithscheduledsampling
AT lijianmin asemanticsassistedvideocaptioningmodeltrainedwithscheduledsampling
AT huxiaolin asemanticsassistedvideocaptioningmodeltrainedwithscheduledsampling
AT chenhaoran semanticsassistedvideocaptioningmodeltrainedwithscheduledsampling
AT linke semanticsassistedvideocaptioningmodeltrainedwithscheduledsampling
AT mayealexander semanticsassistedvideocaptioningmodeltrainedwithscheduledsampling
AT lijianmin semanticsassistedvideocaptioningmodeltrainedwithscheduledsampling
AT huxiaolin semanticsassistedvideocaptioningmodeltrainedwithscheduledsampling
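
Illustrative sketch of the training scheme described in the abstract. Two training-time techniques are named there: scheduled sampling (gradually replacing ground-truth input words with the model's own previous predictions) and a log-probability loss adjusted by sentence length to discourage overly short captions. The Python/PyTorch sketch below is a minimal illustration under stated assumptions, not the authors' released code: the names CaptionDecoder and length_normalized_nll, the feature shapes, the linearly increasing sampling probability, and the choice to normalize the summed negative log-likelihood by caption length are all hypothetical.

# Minimal sketch: scheduled sampling + length-normalized log-probability loss
# for an RNN caption decoder. Names, shapes, and the schedule are assumptions
# for illustration, not the paper's implementation.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F


class CaptionDecoder(nn.Module):
    def __init__(self, vocab_size, feat_dim, hidden_dim=512, embed_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.init_h = nn.Linear(feat_dim, hidden_dim)   # video feature -> initial hidden state
        self.cell = nn.LSTMCell(embed_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, video_feat, captions, sample_prob):
        """captions: (batch, T) token ids starting with <bos>;
        sample_prob: chance of feeding back the model's own previous prediction
        instead of the ground-truth word (scheduled sampling)."""
        batch, T = captions.shape
        h = torch.tanh(self.init_h(video_feat))
        c = torch.zeros_like(h)
        prev = captions[:, 0]                            # <bos>
        logits = []
        for t in range(1, T):
            h, c = self.cell(self.embed(prev), (h, c))
            step_logits = self.out(h)
            logits.append(step_logits)
            if random.random() < sample_prob:            # self-teaching step
                prev = step_logits.argmax(dim=-1).detach()
            else:                                        # teacher-guided step
                prev = captions[:, t]
        return torch.stack(logits, dim=1)                # (batch, T-1, vocab)


def length_normalized_nll(logits, targets, pad_id=0):
    """Sum the negative log-likelihood over each caption and divide by its
    length. This is one plausible reading of adjusting the loss by sentence
    length to reduce the bias toward short captions; the paper's exact
    formulation may differ."""
    nll = F.cross_entropy(logits.transpose(1, 2), targets,
                          ignore_index=pad_id, reduction="none")  # (batch, T-1)
    mask = (targets != pad_id).float()
    lengths = mask.sum(dim=1).clamp(min=1)
    return ((nll * mask).sum(dim=1) / lengths).mean()


# Assumed training schedule: start fully teacher-guided, then gradually shift
# toward feeding back the model's own predictions.
# for epoch in range(num_epochs):
#     sample_prob = min(0.25, epoch / num_epochs)       # assumed schedule, capped
#     logits = decoder(video_feat, captions, sample_prob)
#     loss = length_normalized_nll(logits, captions[:, 1:])

The cap on the sampling probability and the linear ramp are design choices assumed here; any monotone schedule that moves training from teacher forcing toward self-teaching fits the strategy the abstract describes.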